MetalLB Cannot access service from outside of k8s cluster - nginx

I'm having a problem accessing a k8s service from outside the cluster.
K8s version: v1.25
MetalLB version: 0.13 (Flannel v0.20)
IP pool
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: first-pool
  namespace: metallb-system
spec:
  addresses:
  - 151.62.196.222-151.62.196.222
Advertisement
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: adv
  namespace: metallb-system
spec:
  ipAddressPools:
  - first-pool
Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 1
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:latest
        ports:
        - containerPort: 80
Service
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  selector:
    app: nginx
  ports:
  - port: 80
    targetPort: 80
  type: LoadBalancer
When I check the services, I see that nginx-service has been assigned an IP (EXTERNAL-IP: 151.62.196.222).
But when I try to access it, I get an error:
curl: (7) Failed to connect to 151.62.196.222 port 80 after 15 ms: Connection refused
Has anyone experienced the same problem? Thanks!
I tried to access it from inside the cluster, and it works; it returned the nginx welcome page.
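For anyone hitting the same symptom, here is a rough sketch of the first checks worth running (the label selectors assume the standard MetalLB 0.13 manifests and may differ for other install methods; the interface name in arping is a placeholder):
# is a speaker running on every node, and did one of them announce the IP?
kubectl -n metallb-system get pods -o wide
kubectl -n metallb-system logs -l app=metallb,component=speaker --tail=50
# the service events show which node claimed 151.62.196.222
kubectl describe svc nginx-service
# from another machine on the same L2 segment, verify the ARP announcement
arping -I eth0 151.62.196.222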

Related

400: Bad Request blog page via http/https SSL-enabled k3s deployment

I'm using the nginx-ingress controller and MetalLB as the load balancer in my k3s Raspberry Pi cluster. When I try to access my blog site, I get a white page with 400: Bad Request.
I'm using Cloudflare to manage my domain, and the SSL/TLS mode is set to "Full". I created an A record "Blog" and pointed it to my public external IP. I forwarded the LoadBalancer IP address on my router, exposing ports 80 and 443. What am I missing? I've been pulling my hair out over this issue for days now. Here's my entire k3s deployment:
apiVersion: v1
kind: Service
metadata:
  namespace: nginx
  name: nginx-web
  labels:
    app: nginx-web
spec:
  ports:
    # the port that this service should serve on
    - port: 8000
      targetPort: 80
      protocol: TCP
  selector:
    app: nginx-web
  type: ClusterIP
---
apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: nginx
  labels:
    app: nginx-web
  name: nginx-web
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx-web
  template:
    metadata:
      namespace: nginx
      labels:
        app: nginx-web
      name: nginx-web
    spec:
      containers:
      - name: nginx-web
        image: nginx:latest
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx-web
  labels:
    app: nginx-web
  namespace: nginx
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
    cert-manager.io/cluster-issuer: letsencrypt-prod
spec:
  tls:
  - hosts:
    - blog.example.com
    secretName: blog-example-com-tls
  rules:
  - host: blog.example.com
    http:
      paths:
      - backend:
          service:
            name: nginx-web
            port:
              number: 80
        path: /
        pathType: Prefix
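One mismatch worth flagging in the manifests above (an observation added here, not something from the original post): the Service exposes port 8000, but the Ingress backend points at port 80, and the backend-protocol annotation forces HTTPS even though the nginx container only serves plain HTTP on port 80. A minimal sketch of the adjusted pieces:
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/backend-protocol: "HTTP"   # the pod only serves plain HTTP
  ...
      - backend:
          service:
            name: nginx-web
            port:
              number: 8000   # must match the Service port above
        path: /
        pathType: Prefix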

kubernetes nginx deployment ERR_CONNECTION_REFUSED

I use minikube to deploy an example nginx image.
I want to access nginx from localhost, e.g. http://127.0.0.1:8080.
I get ERR_CONNECTION_REFUSED.
kubectl apply -f nginx.yml
deployment.apps/nginx-deployment-name created
kubectl apply -f nginx-service.yml
service/nginx-service-name created
Deployment yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment-name
  labels:
    name: nginx-deployment-name-label
spec:
  replicas: 3
  selector:
    matchLabels:
      name: nginx-template-name
  template:
    metadata:
      labels:
        name: nginx-template-name
    spec:
      containers:
      - name: nginx-container-name
        image: nginx
        ports:
        - containerPort: 8080
Service yml
apiVersion: v1
kind: Service
metadata:
  name: nginx-service-name
spec:
  selector:
    name: nginx-deployment-name-label
  type: NodePort
  ports:
  - name: http
    port: 8080
    targetPort: 8080
    nodePort: 30003
    protocol: TCP
The nginx image listens on port 80, not 8080, and the Service selector has to match the pod labels. A working manifest looks like this:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    name: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      name: nginx
  template:
    metadata:
      labels:
        name: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    name: nginx
spec:
  type: NodePort
  ports:
  - port: 80
    nodePort: 30080
    name: http
  selector:
    name: nginx
Then if you are on minikube for example:
➜ minikube ip
192.168.49.2
curl http://192.168.49.2:30080/
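If you specifically want nginx on http://127.0.0.1:8080 as in the question, a port-forward is a simple alternative (a sketch; the service name matches the corrected manifest above):
kubectl port-forward svc/nginx 8080:80
curl http://127.0.0.1:8080/
# or let minikube print a reachable URL for the NodePort service
minikube service nginx --url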

yaml configuration for Istio and gRPC

I am working on a PoC for Istio + gRPC (Istio version 1.6), but I cannot see any gRPC traffic reaching my pods.
I suspect my Istio Gateway or VirtualService is missing something, but I can't figure out what's wrong. Could anybody review my yaml and point out what's missing or incorrect?
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: syslogserver
  name: syslogserver
  namespace: mynamespace
spec:
  selector:
    matchLabels:
      app: syslogserver
  replicas: 1
  template:
    metadata:
      labels:
        app: syslogserver
    spec:
      containers:
      - name: syslogserver
        image: docker.io/grpc-syslog:latest
        imagePullPolicy: Always
        ports:
        - containerPort: 5555
      imagePullSecrets:
      - name: pull-image-credential
---
apiVersion: v1
kind: Service
metadata:
  name: syslogserver
  namespace: mynamespace
  labels:
    app: syslogserver
spec:
  selector:
    app: syslogserver
  ports:
  - name: grpc
    port: 6666
    protocol: TCP
    targetPort: 5555
---
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: xyz-ingress-gateway
  namespace: istio-system
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 7777
      name: http2
      protocol: HTTP2
    hosts:
    - "*"
---
apiVersion: v1
kind: Service
metadata:
  name: xyz-istio-ingressgateway
  namespace: istio-system
  labels:
    app: xyz-istio-ingressgateway
spec:
  selector:
    app: istio-ingressgateway
    istio: ingressgateway
  type: NodePort
  ports:
  - protocol: TCP
    nodePort: 32555
    port: 7777
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: xyz-ingress-gateway-virtualservice
  namespace: istio-system
spec:
  hosts:
  - "*"
  gateways:
  - xyz-ingress-gateway
  #tls:
  http:
  - match:
    - port: 7777
    route:
    - destination:
        host: syslogserver.mynamespace.svc.cluster.local
        port:
          number: 6666
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: xyz-destinationrule
  namespace: istio-system
spec:
  host: syslogserver.mynamespace.svc.cluster.local
  trafficPolicy:
    loadBalancer:
      simple: ROUND_ROBIN
Please give your guidance, thanks.
From what I see, the Service named xyz-istio-ingressgateway should be deleted, as that's not how you communicate when using Istio.
Instead, you should use the Istio ingress gateway, combined with a Gateway, a VirtualService and a DestinationRule.
If you've chosen port number 7777 on your Gateway, you have to open this port on the Istio ingress gateway; there are a few ways to do that, described in this Stack Overflow question. Here are the default istio ingress gateway values.
After you configure the port, you can use kubectl get svc istio-ingressgateway -n istio-system to get the Istio ingress gateway external IP.
If the EXTERNAL-IP value is set, your environment has an external load balancer that you can use for the ingress gateway. If the EXTERNAL-IP value is pending, your environment does not provide an external load balancer for the ingress gateway. In this case, you can access the gateway using the service's node port.
The rest of your configuration looks fine to me. Just a reminder about injecting a sidecar proxy into your pods.
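For completeness, one way to open the extra port on the default istio-ingressgateway Service is a JSON patch like the sketch below (the port name is made up, and targetPort 7777 assumes the gateway proxy opens a listener on the port declared in the Gateway resource):
kubectl -n istio-system patch svc istio-ingressgateway --type=json \
  -p='[{"op": "add", "path": "/spec/ports/-", "value": {"name": "http2-syslog", "port": 7777, "protocol": "TCP", "targetPort": 7777}}]'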

I am trying to host nginx using a Kubernetes ReplicationController. Hosting succeeds, yet it is not reachable from the host system

replicationcontroller.yml
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx
spec:
  replicas: 3
  selector:
    app: nginx
  template:
    metadata:
      name: nginx
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
nginx-service.yml
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  type: NodePort
  selector:
    app: nginx
  ports:
  - protocol: TCP
    port: 80
Commands:
kubectl create -f replicationcontroller.yml
kubectl create -f nginx-service.yml
I can't reproduce your error; the manifest works for me.
However, if you're using Minikube, you should be aware that Minikube has its own virtual machine with its own IP address. Please try:
curl $(minikube ip):31078
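Since the Service above doesn't pin a nodePort, Kubernetes assigns one at random, so it's worth looking the value up before curling; a small sketch:
kubectl get svc nginx-service -o jsonpath='{.spec.ports[0].nodePort}'
curl $(minikube ip):$(kubectl get svc nginx-service -o jsonpath='{.spec.ports[0].nodePort}')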

Kubernetes nginx ingress is not resolving services

Cloud: Google Cloud Platform.
I have the following configuration
kind: Deployment
apiVersion: apps/v1
metadata:
  name: api
spec:
  replicas: 2
  selector:
    matchLabels:
      run: api
  template:
    metadata:
      labels:
        run: api
    spec:
      containers:
      - name: api
        image: gcr.io/*************/api
        ports:
        - containerPort: 8080
        livenessProbe:
          httpGet:
            path: /_ah/health
            port: 8080
          initialDelaySeconds: 10
          periodSeconds: 5
---
kind: Service
apiVersion: v1
metadata:
  name: api
spec:
  selector:
    run: api
  type: NodePort
  ports:
  - protocol: TCP
    port: 8080
    targetPort: 8080
---
kind: Ingress
apiVersion: extensions/v1beta1
metadata:
  name: main-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
  - http:
      paths:
      - path: /api/*
        backend:
          serviceName: api
          servicePort: 8080
Everything is set up. GKE says that all deployments are okay, the pod counts are met, and the main ingress with the nginx-ingress-controller is in place as well. But I'm not able to reach any of the services. Not even an application-specific 404. Nothing. It's as if nothing gets resolved at all.
Another related question concerns the entrypoints. The first one goes through main-ingress, which created its own LoadBalancer with its own IP address. The second one is the address from the nginx-ingress-controller. The second one at least returns a 404 from the default backend, but it also doesn't point to the expected api service.
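A rough sketch of checks for narrowing down which controller actually claimed the Ingress (the namespace and label selector below are assumptions based on a standard ingress-nginx install; adjust them to your setup):
kubectl describe ingress main-ingress
kubectl get svc -n ingress-nginx    # note the controller's EXTERNAL-IP
kubectl logs -n ingress-nginx -l app.kubernetes.io/name=ingress-nginx --tail=50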
