I have a GKE cluster which uses the Nginx Ingress Controller as its ingress engine. Currently, when I set up the Nginx Ingress Controller I define a Service of kind: LoadBalancer and point it to an external static IP previously reserved on GCP. The problem is that it only binds to a regional static IP address (an L4 Load Balancer, if I'm not mistaken). I want to have a Global Load Balancer instead.
I know that I can achieve that by using the GKE Ingress controller instead of the Nginx Ingress Controller. But I still want to use Nginx Ingress due to its powerful annotations, like rewriting headers based on conditions, etc.; things not available through GKE Ingress annotations.
Finally, is there any way to combine a Global Load Balancer with the Nginx ingress controller, or to put a Global Load Balancer in front of the L4 Load Balancer created by Nginx?
We need to have a Global Load Balancer in order to be protected by Cloud Armor.
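For context, the current setup looks roughly like this (a sketch; the reserved address and names are illustrative):
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
spec:
  type: LoadBalancer
  # reserved regional address -> GCP provisions a regional L4 network load balancer
  loadBalancerIP: <reserved-regional-static-ip>
  selector:
    app.kubernetes.io/name: ingress-nginx
  ports:
    - name: http
      port: 80
      targetPort: http
    - name: https
      port: 443
      targetPort: https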
I finally managed to make Nginx Ingress Controller and L7 HTTP(S) Load Balancer work together.
Based on the #rrob reply to his own question, I managed to make it work. The only difference is that his solution installs a classic HTTP(S) Load Balancer instead of the new version, and I also cover the creation of the IP address, the self-signed certificate, and the HTTP-to-HTTPS redirect. I will place the detailed steps that worked for me here.
These steps assume we already have a cluster created with VPC-native traffic routing enabled.
Before needing the HTTP(S) Load Balancer, I would just apply the manifests provided by the NGINX docs page for the installation of the Nginx Ingress Controller, and it would create a Service of type LoadBalancer, which would then create a regional L4 load balancer automatically.
But now that I need Cloud Armor and WAF, the L4 load balancer is not enough; an HTTP(S) Load Balancer is needed in order for Cloud Armor to work.
In order to have the Nginx Ingress Controller working with the HTTP(S) Load Balancer, we need to change type: LoadBalancer on the Nginx Ingress Controller Service to ClusterIP, and add the NEG annotation cloud.google.com/neg: '{"exposed_ports": {"80":{"name": "ingress-nginx-80-neg"}}}' to it. We do that because we don't want it to generate an L4 load balancer for us. Instead, we will manually create an HTTP(S) Load Balancer and bind it to the ingress-nginx-controller through its NEG annotation. This binding happens later, when we set the NEG exposed by our Nginx Ingress Controller as a backend of our HTTP(S) Load Balancer's Backend Service. So, back to the Nginx Ingress Controller Service, it ends up like this:
apiVersion: v1
kind: Service
metadata:
  annotations:
    cloud.google.com/neg: '{"exposed_ports": {"80":{"name": "ingress-nginx-80-neg"}}}'
  labels:
    helm.sh/chart: ingress-nginx-4.0.15
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.1.1
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: controller
  name: ingress-nginx-controller
  namespace: ingress-nginx
spec:
  type: ClusterIP
  ipFamilyPolicy: SingleStack
  ipFamilies:
    - IPv4
  ports:
    - name: http
      port: 80
      protocol: TCP
      targetPort: http
      appProtocol: http
    - name: https
      port: 443
      protocol: TCP
      targetPort: https
      appProtocol: https
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/component: controller
If you install the Nginx Ingress Controller using Helm, you need to override the default values to change the Service type and add the NEG annotation. The values.yaml would look something like this:
controller:
  service:
    type: ClusterIP
    annotations:
      cloud.google.com/neg: '{"exposed_ports": {"80":{"name": "ingress-nginx-80-neg"}}}'
To install it, add the ingress-nginx repository to Helm:
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
Then install it:
helm install -f values.yaml ingress-nginx ingress-nginx/ingress-nginx
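Before moving on, it can be worth confirming that GKE actually created the NEG from the annotation. A quick sanity check (the svcneg resource is GKE-specific, and the filter assumes the NEG name used above):
kubectl get svcneg -n ingress-nginx
gcloud compute network-endpoint-groups list --filter="name=ingress-nginx-80-neg"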
The next steps use the following variables:
ZONE=us-central1-a
CLUSTER_NAME=<cluster-name>
HEALTH_CHECK_NAME=nginx-ingress-controller-health-check
NETWORK_NAME=<network-name>
CERTIFICATE_NAME=self-managed-exp-<day>-<month>-<year>
NETWORK_TAGS=$(gcloud compute instances describe \
$(kubectl get nodes -o jsonpath='{.items[0].metadata.name}') \
--zone=$ZONE --format="value(tags.items[0])")
Create a static IP address (skip this if you already have one); it has to be Premium tier:
gcloud compute addresses create ${CLUSTER_NAME}-loadbalancer-ip \
--global \
--ip-version IPV4
Create a firewall rule allowing the L7 HTTP(S) Load Balancer to reach our cluster:
gcloud compute firewall-rules create ${CLUSTER_NAME}-allow-tcp-loadbalancer \
--allow tcp:80 \
--source-ranges 130.211.0.0/22,35.191.0.0/16 \
--target-tags $NETWORK_TAGS \
--network $NETWORK_NAME
Create a health check for the Backend Service we are about to create:
gcloud compute health-checks create http ${CLUSTER_NAME}-nginx-health-check \
--port 80 \
--check-interval 60 \
--unhealthy-threshold 3 \
--healthy-threshold 1 \
--timeout 5 \
--request-path /healthz
Create a Backend Service, which is used to tell the load balancer how to connect to and distribute traffic to the pods:
gcloud compute backend-services create ${CLUSTER_NAME}-backend-service \
--load-balancing-scheme=EXTERNAL \
--protocol=HTTP \
--port-name=http \
--health-checks=${CLUSTER_NAME}-nginx-health-check \
--global
Now we add the Nginx NEG (the one exposed by the Service annotation earlier) as a backend of the Backend Service created in the previous step:
gcloud compute backend-services add-backend ${CLUSTER_NAME}-backend-service \
--network-endpoint-group=ingress-nginx-80-neg \
--network-endpoint-group-zone=$ZONE \
--balancing-mode=RATE \
--capacity-scaler=1.0 \
--max-rate-per-endpoint=100 \
--global
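Note that NEGs are zonal: if your node pools span several zones, GKE creates one NEG per zone and each of them has to be added as a backend. A sketch with hypothetical zones:
for NEG_ZONE in us-central1-a us-central1-b us-central1-f; do
  gcloud compute backend-services add-backend ${CLUSTER_NAME}-backend-service \
    --network-endpoint-group=ingress-nginx-80-neg \
    --network-endpoint-group-zone=$NEG_ZONE \
    --balancing-mode=RATE \
    --capacity-scaler=1.0 \
    --max-rate-per-endpoint=100 \
    --global
done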
Create the load balancer itself (the URL map):
gcloud compute url-maps create ${CLUSTER_NAME}-loadbalancer \
--default-service ${CLUSTER_NAME}-backend-service
Create a self-managed certificate (it could also be a Google-managed certificate, but here we cover the self-managed case):
gcloud compute ssl-certificates create $CERTIFICATE_NAME \
--certificate=my-cert.pem \
--private-key=my-cert-key.pem \
--global
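If you don't have my-cert.pem / my-cert-key.pem yet, a self-signed pair can be generated with openssl; a sketch (the CN is a placeholder for your own domain):
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -subj "/CN=app.mydomain.com" \
  -keyout my-cert-key.pem \
  -out my-cert.pem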
Finally, I will set up the load balancer frontend through the Console interface, which is much easier.
To create the load balancer frontend, open the load balancer in the Console and click "Edit".
The Frontend configuration tab will be incomplete; go there.
Click on "ADD FRONTEND IP AND PORT".
Give it a name and select HTTPS in the Protocol field.
Under IP Address, change from Ephemeral to your previously reserved static IP.
Select your certificate and check Enable HTTP to HTTPS redirect if you want (I did).
Save the load balancer.
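For reference, the same frontend can also be created from the command line; a sketch of the HTTPS part (it does not cover the HTTP-to-HTTPS redirect, which needs an extra URL map and HTTP proxy):
gcloud compute target-https-proxies create ${CLUSTER_NAME}-https-proxy \
  --url-map=${CLUSTER_NAME}-loadbalancer \
  --ssl-certificates=$CERTIFICATE_NAME
gcloud compute forwarding-rules create ${CLUSTER_NAME}-https-forwarding-rule \
  --global \
  --address=${CLUSTER_NAME}-loadbalancer-ip \
  --target-https-proxy=${CLUSTER_NAME}-https-proxy \
  --ports=443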
Entering the load balancer page, we should see our nginx instance(s) healthy and green. In my case I've set up the Nginx Ingress Controller with 4 replicas:
Finally, we just need to point our domains to the LoadBalancer IP and create our Ingress file.
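To find the address the DNS records should point at, the reserved IP can be read back with gcloud (assuming the address name used earlier):
gcloud compute addresses describe ${CLUSTER_NAME}-loadbalancer-ip \
  --global --format="value(address)"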
NOTE: The Ingress won't handle the certificate anymore. The certificate is now managed externally by the load balancer, so the Ingress has no tls definition:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/upstream-fail-timeout: "1200"
    nginx.ingress.kubernetes.io/configuration-snippet: |
      set $http_origin "${scheme}://${host}";
      more_set_headers "server: hide";
      more_set_headers "X-Content-Type-Options: nosniff";
      more_set_headers "Referrer-Policy: strict-origin";
  name: ingress-nginx
  namespace: prod
spec:
  rules:
    - host: app.mydomain.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: frontend
                port:
                  number: 80
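And since Cloud Armor was the reason for all of this: once the backends are healthy, a Cloud Armor security policy can be attached to the backend service. A sketch, assuming a policy named my-armor-policy already exists:
gcloud compute backend-services update ${CLUSTER_NAME}-backend-service \
  --security-policy=my-armor-policy \
  --global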
You can create the Nginx controller as a Service of type LoadBalancer and give it a NEG annotation, as per this Google documentation:
https://cloud.google.com/kubernetes-engine/docs/how-to/container-native-load-balancing
Then you can use this NEG as a backend service (target) for HTTP(S) load balancing
You can use the gcloud commands from this article
https://hodo.dev/posts/post-27-gcp-using-neg/
I have two services running in my k8s.
I am trying to access my wallet service from my user service, but my curl command just returns a 504 gateway timeout.
Here is my ingress:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: dev-ingress
  namespace: dev
  annotations:
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
    # nginx.ingress.kubernetes.io/use-regex: "true"
    # nginx.ingress.kubernetes.io/rewrite-target: /api/v1$uri
spec:
  ingressClassName: nginx
  rules:
    - http:
        paths:
          - path: /api/v1/wallet
            pathType: Prefix
            backend:
              service:
                name: wallet-service
                port:
                  number: 80
          - path: /api/v1/user
            pathType: Prefix
            backend:
              service:
                name: accounts-service
                port:
                  number: 80
This is the way I passed the env var in my accounts service:
http://wallet-service:3007
and I log the URL when hitting my endpoint:
curl http://EXTERNAL_IP/api/v1/user/health/wallet
Every other non-related endpoint works.
Any help is appreciated.
I am running Azure Kubernetes
I have two services running in my k8s.
Do you have an actual K8S service?
apiVersion: ...
kind: Service
Check it with kubectl get svc -A; you should see your services there.
Check that your pods are exposed to the services:
kubectl get endpoints -A
Are you running on a cloud provider (GCP, Azure, AWS, etc.)? If so, check your security configuration as well (an NSG for Azure, a security policy for AWS, etc.).
Check inner communication:
# Log into one of the pods
kubectl exec -it -n <namespace> <pod name> -- sh
# Try to connect to the other service with the FQDN
curl -sv <servicename>.<namespace>.svc.cluster.local
Update:
You commented that you are running on Azure; check that the desired ports are open in the NSG.
Assuming that you are running AKS, you need to find the MC_xxxx resource group; your NSG will be located under that resource group. Edit it and open the desired ports.
You are trying to connect to http://wallet-service:3007/api/v1/wallet/health - where did you get the 3007 port?
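If the wallet container really listens on 3007, the Service referenced by the Ingress has to map port 80 to that targetPort; a minimal sketch (the selector label is hypothetical):
apiVersion: v1
kind: Service
metadata:
  name: wallet-service
  namespace: dev
spec:
  selector:
    app: wallet          # hypothetical label; must match the wallet pods
  ports:
    - port: 80           # port the Ingress sends traffic to
      targetPort: 3007   # port the wallet container listens on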
What I want to accomplish
I'm trying to connect an external HTTPS (L7) load balancer with an NGINX Ingress exposed as a zonal Network Endpoint Group (NEG). My Kubernetes cluster (in GKE) contains a couple of web application deployments that I've exposed as a ClusterIP service.
I know that the NGINX Ingress object can be directly exposed as a TCP load balancer. But, this is not what I want. Instead in my architecture, I want to load balance the HTTPS requests with an external HTTPS load balancer. I want this external load balancer to provide SSL/TLS termination and forward HTTP requests to my Ingress resource.
The ideal architecture would look like this:
HTTPS requests --> external HTTPS load balancer --> HTTP request --> NGINX Ingress zonal NEG --> appropriate web application
I'd like to add the zonal NEGs from the NGINX Ingress as the backends for the HTTPS load balancer. This is where things fall apart.
What I've done
NGINX Ingress config
I'm using the default NGINX Ingress config from the official kubernetes/ingress-nginx project. Specifically, this YAML file https://github.com/kubernetes/ingress-nginx/blob/master/deploy/static/provider/cloud/deploy.yaml.
Note that, I've changed the NGINX-controller Service section as follows:
Added NEG annotation
Changed the Service type from LoadBalancer to ClusterIP.
# Source: ingress-nginx/templates/controller-service.yaml
apiVersion: v1
kind: Service
metadata:
  annotations:
    # added NEG annotation
    cloud.google.com/neg: '{"exposed_ports": {"80":{"name": "NGINX_NEG"}}}'
  labels:
    helm.sh/chart: ingress-nginx-3.30.0
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 0.46.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: controller
  name: ingress-nginx-controller
  namespace: ingress-nginx
spec:
  type: ClusterIP
  ports:
    - name: http
      port: 80
      protocol: TCP
      targetPort: http
    - name: https
      port: 443
      protocol: TCP
      targetPort: https
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/component: controller
---
NGINX Ingress routing
I've tested the path based routing rules for the NGINX Ingress to my web applications independently. This works when the NGINX Ingress is configured with a TCP Load Balancer. I've set up my application Deployment and Service configs the usual way.
External HTTPS Load Balancer
I created an external HTTPS load balancer with the following settings:
Backend: added the zonal NEGs named NGINX_NEG as the backends. The backend is configured to accept HTTP requests on port 80. I configured the health check on the serving port via the TCP protocol. I added the firewall rules to allow incoming traffic from 130.211.0.0/22 and 35.191.0.0/16 as mentioned here https://cloud.google.com/kubernetes-engine/docs/how-to/standalone-neg#traffic_does_not_reach_the_endpoints
What's not working
Soon after the external load balancer is set up, I can see that GCP creates a new endpoint under one of the zonal NEGs. But it shows as "Unhealthy", and requests to the external HTTPS load balancer return a 502 error.
I'm not sure where to start debugging this configuration in GCP logging. I've enabled logging for the health check but nothing shows up in the logs.
I configured the health check on the /healthz path of the NGINX Ingress controller. That didn't seem to work either.
Any tips on how to get this to work will be much appreciated. Thanks!
Edit 1: As requested, I ran kubectl get svcneg -o yaml --namespace=<namespace>; here's the output:
apiVersion: networking.gke.io/v1beta1
kind: ServiceNetworkEndpointGroup
metadata:
  creationTimestamp: "2021-05-07T19:04:01Z"
  finalizers:
    - networking.gke.io/neg-finalizer
  generation: 418
  labels:
    networking.gke.io/managed-by: neg-controller
    networking.gke.io/service-name: ingress-nginx-controller
    networking.gke.io/service-port: "80"
  name: NGINX_NEG
  namespace: ingress-nginx
  ownerReferences:
    - apiVersion: v1
      blockOwnerDeletion: false
      controller: true
      kind: Service
      name: ingress-nginx-controller
      uid: <unique ID>
  resourceVersion: "2922506"
  selfLink: /apis/networking.gke.io/v1beta1/namespaces/ingress-nginx/servicenetworkendpointgroups/NGINX_NEG
  uid: <unique ID>
spec: {}
status:
  conditions:
    - lastTransitionTime: "2021-05-07T19:04:08Z"
      message: ""
      reason: NegInitializationSuccessful
      status: "True"
      type: Initialized
    - lastTransitionTime: "2021-05-07T19:04:10Z"
      message: ""
      reason: NegSyncSuccessful
      status: "True"
      type: Synced
  lastSyncTime: "2021-05-10T15:02:06Z"
  networkEndpointGroups:
    - id: <id1>
      networkEndpointType: GCE_VM_IP_PORT
      selfLink: https://www.googleapis.com/compute/v1/projects/<project>/zones/us-central1-a/networkEndpointGroups/NGINX_NEG
    - id: <id2>
      networkEndpointType: GCE_VM_IP_PORT
      selfLink: https://www.googleapis.com/compute/v1/projects/<project>/zones/us-central1-b/networkEndpointGroups/NGINX_NEG
    - id: <id3>
      networkEndpointType: GCE_VM_IP_PORT
      selfLink: https://www.googleapis.com/compute/v1/projects/<project>/zones/us-central1-f/networkEndpointGroups/NGINX_NEG
As per my understanding, your issue is: "when an external load balancer is set up, GCP creates a new endpoint under one of the zonal NEGs, it shows as "Unhealthy", and requests to the external HTTPS load balancer return a 502 error".
Essentially, the Service's annotation, cloud.google.com/neg: '{"ingress": true}', enables container-native load balancing. After creating the Ingress, an HTTP(S) load balancer is created in the project, and NEGs are created in each zone in which the cluster runs. The endpoints in the NEG and the endpoints of the Service are kept in sync.
Refer to the link [1].
New endpoints generally become reachable after attaching them to the load balancer, provided that they respond to health checks. You might encounter 502 errors or rejected connections if traffic cannot reach the endpoints.
One of your endpoints in zonal NEG is showing unhealthy so please confirm the status of other endpoints and how many endpoints are spread across the zones in the backend.
If all backends are unhealthy, then your firewall, Ingress, or service might be misconfigured.
You can run the following command to check whether your endpoints are healthy (see link [2]):
gcloud compute network-endpoint-groups list-network-endpoints NAME --zone=ZONE
To troubleshoot traffic that is not reaching the endpoints, verify that the health check firewall rules allow incoming TCP traffic to your endpoints from the 130.211.0.0/22 and 35.191.0.0/16 ranges. But, as you mentioned, you have already configured this rule correctly. Please refer to link [3] for the health check configuration.
Run a curl command against your LB IP to check for a response:
curl [LB IP]
[1] https://cloud.google.com/kubernetes-engine/docs/concepts/ingress-xlb
[2] https://cloud.google.com/load-balancing/docs/negs/zonal-neg-concepts#troubleshooting
[3] https://cloud.google.com/kubernetes-engine/docs/concepts/ingress#health_checks
I had a couple of questions regarding Kubernetes ingress services/controllers.
For example, I have an nginx frontend image that I am trying to run with kubectl:
kubectl run <deployment> --image <repo> --port <internal-nginx-port>
Now I tried to expose this to the outside world with a service:
kubectl expose deployment <deployment> --target-port <port>
Then I tried to create an Ingress with the following nginx-ing.yaml:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: urtutorsv2ingress
  annotations:
    kubernetes.io/ingress.global-static-ip-name: "coreos"
spec:
  backend:
    serviceName: <service>
    servicePort: <port>
where my ingress.global-static-ip-name is correctly created and available in the Google Cloud console.
[I am assuming the servicePort here is the port I want on my "coreos" IP, so I set it to 80 initially, which didn't work; then I tried setting it to the one specified in the first step, but it still didn't work.]
So the issue is that I am not able to access the frontend at either URL:
http://COREOS_IP, http://COREOS_IP:<port>
Which is why I tried to use:
kubectl expose deployment <deployment> --target-port <port> --type NodePort
to see if it worked with a NodePort, and I was able to access the frontend.
So, I am thinking there might be a configuration mistake here because of which I am not getting results with the ingress.
Can anyone here help debug / fix the issue ?
Yeah, the service is there. I tried to check the status with kubectl get services and kubectl describe service k8urtutorsv2. It showed the service. I tried editing it and saved the NodePort value. The thing is, it works with the NodePort but not 80 or 443.
You cannot directly expose a service on port 80 or 443.
The available range of ports for exposed services is predefined in the kube-apiserver configuration by the service-node-port-range option, with the default value 30000-32767.
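For reference, on GKE the usual way to get plain ports 80/443 in front of a deployment is a Service of type LoadBalancer (or an Ingress), rather than a NodePort; a sketch:
kubectl expose deployment <deployment> --port 80 --target-port <internal-nginx-port> --type LoadBalancer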
Running Kubernetes on GKE
Installed the Nginx controller (latest stable release) using helm.
Everything works well, except that adding the whitelist-source-range annotation results in me being completely locked out of my service.
Ingress config
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: staging-ingress
  namespace: staging
  annotations:
    kubernetes.io/ingress.class: nginx
    ingress.kubernetes.io/whitelist-source-range: "x.x.x.x, y.y.y.y"
spec:
  rules:
    - host: staging.com
      http:
        paths:
          - path: /
            backend:
              serviceName: staging-service
              servicePort: 80
I connected to the controller pod and checked the nginx config and found this:
# Deny for staging.com/
geo $the_real_ip $deny_5b3266e9d666401cb7ac676a73d8d5ae {
    default 1;
    x.x.x.x 0;
    y.y.y.y 0;
}
It looks like it is locking me out instead of whitelisting these IPs. But it is also locking out all other addresses... I get a 403 when going to the staging.com host.
Yes. However, I figured it out by myself. Your service has to have externalTrafficPolicy: Local enabled. That means that the actual client IP is used instead of the internal cluster IP.
To accomplish this, run:
kubectl patch svc nginx-ingress-controller -p '{"spec":{"externalTrafficPolicy":"Local"}}'
Your nginx controller Service has to be set to externalTrafficPolicy: Local. That means that the actual client IP will be used instead of the cluster-internal IP.
You need to get the real service name from the kubectl get svc command. The service will be something like:
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
nobby-leopard-nginx-ingress-controller LoadBalancer 10.0.139.37 40.83.166.29 80:31223/TCP,443:30766/TCP 2d
nobby-leopard-nginx-ingress-controller is the service name you want to use.
To finish this, run:
kubectl patch svc nobby-leopard-nginx-ingress-controller -p '{"spec":{"externalTrafficPolicy":"Local"}}'
When setting up a new nginx ingress controller, you can use the command below:
helm install stable/nginx-ingress \
  --namespace kube-system \
  --set controller.service.externalTrafficPolicy=Local
to have an nginx ingress controller that respects the whitelist right after installation.
EDIT: The whole point of my setup is to achieve (if possible) the following:
I have multiple k8s nodes
When I contact an IP address (from my company's network), it should be routed to one of my container/pod/service/whatever.
I should be able to easily setup that IP (like in my service .yml definition)
I'm running a small Kubernetes cluster (built with kubeadm) in order to evaluate whether I can move my Docker (old) Swarm setup to k8s. The feature I absolutely need is the ability to assign IPs to containers, like I do with MacVlan.
In my current Docker setup, I'm using MacVlan to assign IP addresses from my company's network to some containers, so I can reach them directly (without a reverse proxy), as if they were physical servers. I'm trying to achieve something similar with k8s.
I found out that:
I have to use a Service
I can't use the LoadBalancer type, as it's only for compatible cloud providers (like GCE or AWS)
I should use externalIPs
Ingress resources are some kind of reverse proxy?
My yaml file is:
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:1.7.9
          ports:
            - containerPort: 80
      nodeSelector:
        kubernetes.io/hostname: k8s-slave-3
---
kind: Service
apiVersion: v1
metadata:
  name: nginx-service
spec:
  type: ClusterIP
  selector:
    app: nginx
  ports:
    - name: http
      protocol: TCP
      port: 80
      targetPort: 80
  externalIPs:
    - A.B.C.D
I was hoping that my service would get the IP A.B.C.D (which is on my company's network). My deployment is working, as I can reach my nginx container from inside the k8s cluster using its ClusterIP.
What am I missing? Or at least, where can I find information on my network traffic in order to see if packets are arriving?
EDIT :
$ kubectl get svc
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes 10.96.0.1 <none> 443/TCP 6d
nginx-service 10.102.64.83 A.B.C.D 80/TCP 23h
Thanks.
First of all, run this command:
kubectl get -n namespace services
The above command will return output like this:
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
backend NodePort 10.100.44.154 <none> 9400:3003/TCP 13h
frontend NodePort 10.107.53.39 <none> 3000:30017/TCP 13h
It is clear from the above output that external IPs are not yet assigned to the services. To assign an external IP to the backend service, run the following command:
kubectl patch svc backend -p '{"spec":{"externalIPs":["192.168.0.194"]}}'
and to assign an external IP to the frontend service, run this command:
kubectl patch svc frontend -p '{"spec":{"externalIPs":["192.168.0.194"]}}'
Now list the services in the namespace again to check whether the external IPs were assigned:
kubectl get -n namespace services
We get output like this:
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
backend NodePort 10.100.44.154 192.168.0.194 9400:3003/TCP 13h
frontend NodePort 10.107.53.39 192.168.0.194 3000:30017/TCP 13h
Cheers!!! The Kubernetes external IPs are now assigned.
If this is just for testing, then try
kubectl port-forward service/nginx-service 80:80
Then you can
curl http://localhost:80
A solution that could work (and not only for testing, though it has its shortcomings) is to set your Pod to map the host network with the hostNetwork spec field set to true.
It means that you won't need a service to expose your Pod, as it will always be accessible on your host via a single port (the containerPort you specified in the manifest). No need to keep a DNS mapping record in that case.
This also means that you can only run a single instance of this Pod on a given node (talking about shortcomings...). As such, it makes it a good candidate for a DaemonSet object.
If your Pod still needs to access/resolve internal Kubernetes hostnames, you need to set the dnsPolicy spec field to ClusterFirstWithHostNet. This setting enables your pod to use the K8S DNS service.
Example:
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: nginx
spec:
template:
metadata:
labels:
app: nginx-reverse-proxy
spec:
hostNetwork: true
dnsPolicy: ClusterFirstWithHostNet
tolerations: # allow a Pod instance to run on Master - optional
- key: node-role.kubernetes.io/master
effect: NoSchedule
containers:
- image: nginx
name: nginx
ports:
- name: http
containerPort: 80
- name: https
containerPort: 443
EDIT: I was put on this track thanks to the ingress-nginx documentation.
You can just patch an external IP.
CMD: $ kubectl patch svc svc_name -p '{"spec":{"externalIPs":["your_external_ip"]}}'
E.g.: $ kubectl patch svc kubernetes -p '{"spec":{"externalIPs":["10.2.8.19"]}}'
You can try the kube-keepalived-vip configuration to route the traffic: https://github.com/kubernetes/contrib/tree/master/keepalived-vip
You can try adding type: NodePort in your yaml file for the service; then you'll have a port to access it via the web browser or from the outside. In my case, it helped.
I don't know if that helps in your particular case, but what I did (and I'm on a bare-metal cluster) was to use the LoadBalancer type and set the loadBalancerIP as well as the externalIPs to my server IP, as you did.
After that, the correct external IP showed up for the load balancer.
Always use the namespace flag either before or after the service name, because namespace-based scoping applies to deployments and services, and it points the command at the service tagged with the specific namespace:
kubectl patch svc service-name -n namespace -p '{"spec":{"externalIPs":["IP"]}}'
Just include an additional option:
kubectl expose deployment hello-world --type=LoadBalancer --name=my-service --external-ip=1.1.1.1