I've got a pod with 2 containers, both running nginx. One is running on port 80, the other on port 88. I have no trouble accessing the one on port 80, but can't seem to access the one on port 88. When I try, I get:
This site can’t be reached
The connection was reset.
ERR_CONNECTION_RESET
So here are the details.
1) The container is defined in the deployment YAML as:
- name: rss-reader
  image: nickchase/nginx-php-rss:v3
  ports:
  - containerPort: 88
2) I created the service with:
kubectl expose deployment rss-site --port=88 --target-port=88 --type=NodePort --name=backend
3) This created the following service:
root@kubeclient:/home/ubuntu# kubectl describe service backend
Name: backend
Namespace: default
Labels: app=web
Selector: app=web
Type: NodePort
IP: 11.1.250.209
Port: <unset> 88/TCP
NodePort: <unset> 31754/TCP
Endpoints: 10.200.41.2:88,10.200.9.2:88
Session Affinity: None
No events.
And when I tried to access it, I used the URL
http://[nodeip]:31754/index.php
Now, when I instantiate the container manually with Docker, this works.
So anybody have a clue what I'm missing here?
Thanks in advance...
My presumption is that you're using the wrong access IP. Are you trying to access the node's (minion's) IP address on port 31754?
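To rule that out, a minimal sketch (the service name and NodePort come from the output above; <node-ip> is a placeholder for whichever node address is reachable from where you are browsing):

# list the nodes and their addresses; pick the one reachable from your machine
kubectl get nodes -o wide

# confirm the NodePort and endpoints are what you expect
kubectl get svc backend
kubectl get endpoints backend

# then hit the NodePort on that node directly
curl -v http://<node-ip>:31754/index.php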
I'm using GKE version 1.21.12-gke.1700 and I'm trying to set externalTrafficPolicy to "Local" on my nginx external load balancer (not an ingress). After the change, nothing happens, and I still see the source IP as an internal IP from the Kubernetes IP range instead of the client's IP.
This is my service's YAML:
apiVersion: v1
kind: Service
metadata:
  name: nginx-ext
  namespace: my-namespace
spec:
  externalTrafficPolicy: Local
  healthCheckNodePort: xxxxx
  ipFamilies:
  - IPv4
  ipFamilyPolicy: SingleStack
  loadBalancerSourceRanges:
  - x.x.x.x/32
  ports:
  - name: dashboard
    port: 443
    protocol: TCP
    targetPort: 443
  selector:
    app: nginx
  sessionAffinity: None
  type: LoadBalancer
And the nginx logs:
*2 access forbidden by rule, client: 10.X.X.X
My goal is to make an endpoint-based restriction (deny all and allow only specific clients).
You can use curl to query the IP of the load balancer, for example: curl 202.0.113.120. Please note that setting service.spec.externalTrafficPolicy to Local in GKE removes nodes without service endpoints from the list of nodes eligible for load-balanced traffic, so if you apply the Local value you need at least one service endpoint on the nodes that should receive traffic. Because of this, it is also important to account for service.spec.healthCheckNodePort: that port needs to be allowed in the ingress firewall rule. You can get the health check node port from your service with this command:
kubectl get svc loadbalancer -o yaml | grep -i healthCheckNodePort
You can follow this guide if you need more information about how the LoadBalancer service type works in GKE. Finally, you can limit traffic from outside at your external load balancer by configuring loadBalancerSourceRanges. In the following link, you can find more information on how to protect your applications from outside traffic.
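As a sketch of that firewall step on GCP (the rule name, network, and port are placeholders; 130.211.0.0/22 and 35.191.0.0/16 are the documented Google Cloud health-check source ranges):

# allow Google Cloud health checks to reach the service's healthCheckNodePort
# on the cluster nodes (replace NETWORK and HEALTH_CHECK_NODE_PORT)
gcloud compute firewall-rules create allow-lb-health-checks \
    --network=NETWORK \
    --direction=INGRESS \
    --allow=tcp:HEALTH_CHECK_NODE_PORT \
    --source-ranges=130.211.0.0/22,35.191.0.0/16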
Scenario:
I have four (4) Pods: payroll, internal, external, and mysql.
I want internal pod to only access:
a. Internal > Payroll on port 8080
b. Internal > mysql on port 3306
Kindly suggest what part is missing. I made the network policy below, but my pod is unable to communicate with 'any' pod.
Hence it has achieved the given target, but practically unable to access other pods. Below are my network policy details.
master $ k describe netpol/internal-policy
Name: internal-policy
Namespace: default
Created on: 2020-02-20 02:15:06 +0000 UTC
Labels: <none>
Annotations: <none>
Spec:
PodSelector: name=internal
Allowing ingress traffic:
<none> (Selected pods are isolated for ingress connectivity)
Allowing egress traffic:
To Port: 8080/TCP
To:
PodSelector: name=payroll
----------
To Port: 3306/TCP
To:
PodSelector: name=mysql
Policy Types: Egress
Policy YAML
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: internal-policy
  namespace: default
spec:
  podSelector:
    matchLabels:
      name: internal
  policyTypes:
  - Egress
  egress:
  - to:
    - podSelector:
        matchLabels:
          name: payroll
    ports:
    - protocol: TCP
      port: 8080
  - to:
    - podSelector:
        matchLabels:
          name: mysql
    ports:
    - protocol: TCP
      port: 3306
Hence it has achieved the given target, but practically unable to
access other pods.
If I understand you correctly, you achieved the goal defined in your network policy, and all your Pods carrying the label name: internal are currently able to communicate both with payroll (on port 8080) and mysql (on port 3306) Pods, right?
Please correct me if I'm wrong, but I see some contradiction in your statement. On the one hand you want your internal Pods to be able to communicate only with a very specific set of Pods, and to connect to them using specified ports:
I want internal pod to only access:
a. Internal > Payroll on port 8080
b. Internal > mysql on port 3306
and on the other, you seem surprised that they can't access any other Pods:
Hence it has achieved the given target, but practically unable to
access other pods.
Keep in mind that when you apply a NetworkPolicy rule to a specific set of Pods, a default deny-all rule is implicitly applied to the selected Pods at the same time (unless you decide to reconfigure the default policy to make it work the way you want).
As you can read here:
Pods become isolated by having a NetworkPolicy that selects them. Once
there is any NetworkPolicy in a namespace selecting a particular pod,
that pod will reject any connections that are not allowed by any
NetworkPolicy. (Other pods in the namespace that are not selected by
any NetworkPolicy will continue to accept all traffic.)
The above also applies to egress rules.
If your internal Pods currently have access only to the payroll and mysql Pods on the specified ports, everything works as it is supposed to.
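As a quick sanity check, a sketch (it assumes the internal Pod's image ships nc; also note that this egress policy does not allow DNS on port 53, so name lookups from the Pod will fail and you should test against Pod IPs directly):

# connections to the allowed destinations should succeed
kubectl exec internal -- nc -zv -w 2 <payroll-pod-ip> 8080
kubectl exec internal -- nc -zv -w 2 <mysql-pod-ip> 3306

# any other destination or port should time out, e.g.
kubectl exec internal -- nc -zv -w 2 <external-pod-ip> 80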
If you want to deny all other traffic to your payroll and mysql Pods, you should apply an ingress rule to those Pods rather than defining egress rules on the Pods that are supposed to communicate with them; that way those client Pods are not deprived of the ability to communicate with other Pods.
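For illustration, a minimal sketch of such an ingress policy for the mysql Pod (the policy name is made up; an analogous policy would cover payroll on port 8080):

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: mysql-ingress-policy
  namespace: default
spec:
  podSelector:
    matchLabels:
      name: mysql
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          name: internal
    ports:
    - protocol: TCP
      port: 3306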
Please let me know if it helped. If something is not clear or my assumption was wrong, also please let me know and don't hesitate to ask additional questions.
Currently I am having an issue with one of my services set to be a load balancer. I am trying to get source IP preservation, as stated in the docs. However, when I set externalTrafficPolicy to Local, I lose all traffic to the service. Is there something I'm missing that is causing this to fail?
Load Balancer Service:
apiVersion: v1
kind: Service
metadata:
  labels:
    app: loadbalancer
    role: loadbalancer-service
  name: lb-test
  namespace: default
spec:
  clusterIP: 10.3.249.57
  externalTrafficPolicy: Local
  ports:
  - name: example service
    nodePort: 30581
    port: 8000
    protocol: TCP
    targetPort: 8000
  selector:
    app: loadbalancer-example
    role: example
  type: LoadBalancer
status:
  loadBalancer:
    ingress:
    - ip: *example.ip*
Could be several things. A couple of suggestions:
Your service is getting an external IP and doesn't know how to reply based on the local IP address of the pod.
Try running a sniffer on your pod to see if you are getting packets from the external source.
Try checking the logs of your application.
The health check in your load balancer is failing. Check the load balancer for your service in the GCP console.
Check whether the instance port is listening (probably not, if your health check is failing).
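On the health-check point, a sketch of one way to verify it (with externalTrafficPolicy: Local, kube-proxy exposes a health endpoint on the service's healthCheckNodePort that returns 200 only on nodes running a local endpoint; <node-ip> is a placeholder):

# find the port the cloud load balancer health-checks
kubectl get svc lb-test -o jsonpath='{.spec.healthCheckNodePort}'

# check it on each node; nodes without a local pod of the service
# will answer with a non-200 status
curl http://<node-ip>:<healthCheckNodePort>/healthz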
Hope it helps.
I have set up a working k8s cluster.
Each node of the cluster is inside the network 10.11.12.0/24 (the physical network). A flanneld (Canal) CNI runs over this network.
Each node has another network interface (not managed by k8s) with CIDR 192.168.0.0/24.
When I deploy a service like:
kind: Service
apiVersion: v1
metadata:
  name: my-awesome-webapp
spec:
  selector:
    server: serverA
  ports:
  - protocol: TCP
    port: 80
    targetPort: 8080
  externalTrafficPolicy: Local
  type: LoadBalancer
  externalIPs:
  - 192.168.0.163
The service is accessible at http://192.168.0.163, but the Pod sees source IP 192.168.0.163 (the eth0 address of the server), not my source IP (192.168.0.94).
The deployment consists of 2 pods with the same spec.
Is it possible for the Pods to see my source IP?
Does anyone know how to manage this? externalTrafficPolicy: Local does not seem to be working.
Kubernetes replaces the source IP with cluster/node IPs; detailed information can be found in this document. Kubernetes has a feature to preserve the client source IP, which I believe you are already aware of.
It seems this is a bug in Kubernetes, and there is already an open issue for the setting below not working properly.
externalTrafficPolicy: Local
I suggest posting on that issue to bring more attention to it.
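If you want to verify whether source IP preservation works at all on this cluster, a minimal sketch (the names source-ip-app and clientip-test are made up; the echoserver image is the one used in the Kubernetes "Using Source IP" tutorial and prints the client_address it sees):

kubectl create deployment source-ip-app --image=registry.k8s.io/echoserver:1.4
kubectl expose deployment source-ip-app --name=clientip-test \
    --port=80 --target-port=8080 --type=LoadBalancer
kubectl patch svc clientip-test -p '{"spec":{"externalTrafficPolicy":"Local"}}'

# from the external client (192.168.0.94 in your case):
curl http://<service-external-ip>/ | grep client_address
# client_address should show 192.168.0.94 when the source IP is preserved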
I have a Kubernetes cluster running 1.2.3 binaries along with flannel 0.5.5. I am using the GCE backend with IP forwarding enabled. For some reason, although I specify a specific Node's external IP address, it will not forward to the appropriate node.
Additionally, I cannot create external load balancers; the controller-manager says it can't find the GCE instances that are the nodes, even though they are in the Ready state. I've looked at the source where the load balancer creation happens; my guess is that it's either a permissions issue (I gave the cluster full permissions for GCE) or it's not finding the metadata.
Here is an example of the services in question:
kind: "Service"
apiVersion: "v1"
metadata:
name: "client"
spec:
ports:
- protocol: "TCP"
port: 80
targetPort: 80
name: "insecure"
- protocol: "TCP"
port: 443
targetPort: 443
name: "secure"
selector:
name: "client"
sessionAffinity: "ClientIP"
externalIPs:
- "<Node External IP>"
And when I was trying to create the load balancer, it had the type: LoadBalancer.
Why would the forwarding to the node IP not work? I have an idea about the load balancer issue, but does anyone have any insight?
So I eventually worked around this issue by creating an external load balancer. Only then did I have a valid external IP.
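For anyone hitting the same thing, a generic sketch of that route, assuming the cloud provider integration is working (the service name comes from the manifest above; this is standard kubectl usage, not necessarily the exact steps used here):

# switch the service to type LoadBalancer instead of relying on externalIPs
kubectl patch service client -p '{"spec":{"type":"LoadBalancer"}}'

# wait until EXTERNAL-IP changes from <pending> to a real address
kubectl get service client --watch

# then test against the provisioned load balancer IP
curl http://<external-ip>/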