I installed Minikube v1.3.1 on my RedHat EC2 instance for some tests.
Since the ports that the nginx-ingress-controller uses by default are already in use, I am trying to change them in the deployment, but without success. Could somebody please advise how to do it?
How do I know that the ports are already in use?
When I list the kube-system deployments using the command kubectl -n kube-system get deployment | grep nginx, I get:
nginx-ingress-controller 0/1 1 0 9d
meaning that my container is not up. When I describe it using the command kubectl -n kube-system describe pod nginx-ingress-controller-xxxxx I get:
Type     Reason                  Age                      From               Message
----     ------                  ----                     ----               -------
Warning  FailedCreatePodSandBox  42m (x163507 over 2d1h)  kubelet, minikube  (combined from similar events): Failed create pod sandbox: rpc error: code = Unknown desc = failed to start sandbox container for pod "nginx-ingress-controller-xxxx": Error response from daemon: driver failed programming external connectivity on endpoint k8s_POD_nginx-ingress-controller-xxxx_kube-system_...: Error starting userland proxy: listen tcp 0.0.0.0:443: bind: address already in use
Then I check which processes are using those ports and kill them. That frees the ports up and the ingress-controller pod gets deployed correctly.
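For reference, this is roughly how I check and free the ports (a sketch; it assumes ss is available on the host, and <pid> is a placeholder for the process ID reported by the first command):
# list listeners on ports 80/443 together with the owning process
sudo ss -ltnp | grep -E ':(80|443) '
# stop the offending process
sudo kill <pid>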
What did I try in order to change the nginx-ingress-controller ports?
kubectl -n kube-system get deployment | grep nginx
> NAME READY UP-TO-DATE AVAILABLE AGE
> nginx-ingress-controller 0/1 1 0 9d
kubectl -n kube-system edit deployment nginx-ingress-controller
The relevant part of my deployment looks like this:
name: nginx-ingress-controller
ports:
- containerPort: 80
  hostPort: 80
  protocol: TCP
- containerPort: 443
  hostPort: 443
  protocol: TCP
- containerPort: 81
  hostPort: 81
  protocol: TCP
- containerPort: 444
  hostPort: 444
  protocol: TCP
- containerPort: 18080
  hostPort: 18080
  protocol: TCP
Then I remove the subsections with ports 443 and 80, but when I roll out the changes, they get added back again.
Now my services are not reachable anymore through ingress.
Please note that minikube ships with addon-manager, whose role is to keep an eye on specific addon template files (default location: /etc/kubernetes/addons/) and take one of two actions based on the value of the managed resource's label:
addonmanager.kubernetes.io/mode
addonmanager.kubernetes.io/mode=Reconcile
Will be periodically reconciled. Direct manipulation of these addons
through the apiserver is discouraged because addon-manager will bring
them back to the original state.
addonmanager.kubernetes.io/mode=EnsureExists
Will be checked for existence only. Users can edit these addons as
they want.
So to keep your customized version of the default Ingress listening ports, first change the Ingress deployment template's mode label to EnsureExists on the minikube VM.
Basically, minikube bootstraps the NGINX Ingress Controller as a separate addon, so by design you have to enable it in order to create the Ingress Controller's resources in the minikube cluster.
Once you enable a specific minikube addon, addon-manager creates template files for each component, placing them in the /etc/kubernetes/addons/ folder on the host machine, and then applies each manifest, creating the corresponding K8s resources; furthermore, addon-manager continuously inspects the actual state of all addon resources, synchronizing the target K8s resources (Service, Deployment, etc.) with the template data.
Therefore, you can consider modifying the Ingress addon template data via the ingress-*.yaml files under the /etc/kubernetes/addons/ directory, propagating the desired values into the target k8s objects; it may take some time until K8s reflects the changes and re-spawns the related ReplicaSet-based resources, as sketched below.
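For illustration, a rough sketch of that workflow on the minikube VM (the template file name ingress-dp.yaml and the new port value are assumptions; check what actually exists under /etc/kubernetes/addons/ first):
minikube ssh
# see which ingress addon templates addon-manager is watching
ls /etc/kubernetes/addons/ | grep ingress
# example only: change hostPort 443 to 444 in the (assumed) deployment template
sudo sed -i 's/hostPort: 443/hostPort: 444/' /etc/kubernetes/addons/ingress-dp.yaml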
Well, I think you have to modify the Ingress which refers to the service you're trying to expose on a custom port.
This can be done with a custom annotation. Here is an example for your port 444:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: myservice
  namespace: mynamespace
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.org/listen-ports-ssl: "444"
spec:
  tls:
  - hosts:
    - host.org
    secretName: my-host-tls-cert
  rules:
  - host: host.org
    http:
      paths:
      - path: /
        backend:
          serviceName: my-service
          servicePort: 444
Related
Let me explain what the deployment consists of. First of all, I created a Cloud SQL database by importing some data. To connect the database to the application I used cloud-sql-proxy, and so far everything works.
I created a Kubernetes cluster in which there is a pod containing the Docker container of the application that I want to deploy, and so far everything works. To reach the application over HTTPS I followed several online guides (e.g. https://cloud.google.com/load-balancing/docs/ssl-certificates/google-managed-certs#console), all of which converge on using a Kubernetes Service and an Ingress. The former maps Spring's port 8080 to port 80, while the latter creates a load balancer that exposes an HTTPS frontend. I configured a health check and created a (Google-managed) certificate associated with a domain that maps to the static IP assigned to the Ingress.
Apparently everything works, but as soon as you try to reach https://example.org/ from the browser, you are correctly redirected to the login page (http://example.org/login), but as you can see it switches to the HTTP protocol, and obviously a 404 is returned by Google since HTTP is disabled. Forcing HTTPS on the address it redirects you to (https://example.org/login) then, for some absurd reason, adds "www" in front of the domain name (https://www.example.org/login). If you try not to use the domain and switch to the static IP, the www problem disappears; however, every time you make an HTTPS request it keeps changing to HTTP.
P.S. The general goal is to have HTTP up to the load balancer (Google's internal network) and HTTPS between the load balancer and the client.
Can anyone help me? If it helps, I'm posting the YAML file of the deployment below. Thank you very much!
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    run: my-app # Label for the Deployment
  name: my-app # Name of Deployment
spec:
  minReadySeconds: 60 # Number of seconds to wait after a Pod is created and its status is Ready
  selector:
    matchLabels:
      run: my-app
  template: # Pod template
    metadata:
      labels:
        run: my-app # Labels Pods from this Deployment
    spec: # Pod specification; each Pod created by this Deployment has this specification
      containers:
      - image: eu.gcr.io/my-app/my-app-production:latest # Application to run in Deployment's Pods
        name: my-app-production # Container name
        # Note: The following line is necessary only on clusters running GKE v1.11 and lower.
        # For details, see https://cloud.google.com/kubernetes-engine/docs/how-to/container-native-load-balancing#align_rollouts
        ports:
        - containerPort: 8080
          protocol: TCP
      - image: gcr.io/cloudsql-docker/gce-proxy:1.17
        name: cloud-sql-proxy
        command:
        - "/cloud_sql_proxy"
        - "-instances=my-app:europe-west6:my-app-cloud-sql-instance=tcp:3306"
        - "-credential_file=/secrets/service_account.json"
        securityContext:
          runAsNonRoot: true
        volumeMounts:
        - name: my-app-service-account-secret-volume
          mountPath: /secrets/
          readOnly: true
      volumes:
      - name: my-app-service-account-secret-volume
        secret:
          secretName: my-app-service-account-secret
      terminationGracePeriodSeconds: 60 # Number of seconds to wait for connections to terminate before shutting down Pods
---
apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
  name: my-app-health-check
spec:
  healthCheck:
    checkIntervalSec: 60
    port: 8080
    type: HTTP
    requestPath: /health/check
---
apiVersion: v1
kind: Service
metadata:
  name: my-app-svc # Name of Service
  annotations:
    cloud.google.com/neg: '{"ingress": true}' # Creates a NEG after an Ingress is created
    cloud.google.com/backend-config: '{"default": "my-app-health-check"}'
spec: # Service's specification
  type: ClusterIP
  selector:
    run: my-app # Selects Pods labelled run: neg-demo-app
  ports:
  - port: 80 # Service's port
    protocol: TCP
    targetPort: 8080
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: my-app-ing
  annotations:
    kubernetes.io/ingress.global-static-ip-name: "my-static-ip"
    ingress.gcp.kubernetes.io/pre-shared-cert: "example-org"
    kubernetes.io/ingress.allow-http: "false"
spec:
  backend:
    serviceName: my-app-svc
    servicePort: 80
  tls:
  - secretName: example-org
    hosts:
    - example.org
---
As I mentioned in the comment section, you can redirect HTTP to HTTPS.
Google Cloud has quite good documentation with step-by-step guides, including firewall configuration and tests. You can find this guide here.
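For reference, a minimal sketch of the GKE-native redirect described there, assuming a GKE version that supports FrontendConfig (the name my-frontend-config is made up):
apiVersion: networking.gke.io/v1beta1
kind: FrontendConfig
metadata:
  name: my-frontend-config   # made-up name
spec:
  redirectToHttps:
    enabled: true
The FrontendConfig is then referenced from the Ingress with the annotation networking.gke.io/v1beta1.FrontendConfig: "my-frontend-config"; as far as I know, HTTP must stay enabled on the Ingress (i.e. do not set kubernetes.io/ingress.allow-http: "false"), since the redirect is served by the HTTP frontend.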
I would also suggest reading docs like:
Traffic management overview for external HTTP(S) load balancers
Setting up traffic management for external HTTP(S) load balancers
Routing and traffic management
As an alternative, you could check the NGINX Ingress Controller with the proper annotation (force-ssl-redirect). Some examples can be found here.
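A minimal sketch of that alternative (it assumes you run the NGINX Ingress Controller instead of the GCE one; the Ingress name, host and backend are placeholders based on your manifests):
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: my-app-ing
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
spec:
  rules:
  - host: example.org
    http:
      paths:
      - path: /
        backend:
          serviceName: my-app-svc
          servicePort: 80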
Scenario:
I have four (4) Pods: payroll, internal, external, mysql.
I want the internal pod to access only:
a. Internal > Payroll on port 8080
b. Internal > mysql on port 3306
Kindly suggest what the missing part is. I made the network policy below, but my pod is unable to communicate with 'any' pod.
Hence it has achieved the given target, but is practically unable to access other pods. Below are my network policy details.
master $ k describe netpol/internal-policy
Name: internal-policy
Namespace: default
Created on: 2020-02-20 02:15:06 +0000 UTC
Labels: <none>
Annotations: <none>
Spec:
PodSelector: name=internal
Allowing ingress traffic:
<none> (Selected pods are isolated for ingress connectivity)
Allowing egress traffic:
To Port: 8080/TCP
To:
PodSelector: name=payroll
----------
To Port: 3306/TCP
To:
PodSelector: name=mysql
Policy Types: Egress
Policy YAML
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: internal-policy
  namespace: default
spec:
  podSelector:
    matchLabels:
      name: internal
  policyTypes:
  - Egress
  egress:
  - to:
    - podSelector:
        matchLabels:
          name: payroll
    ports:
    - protocol: TCP
      port: 8080
  - to:
    - podSelector:
        matchLabels:
          name: mysql
    ports:
    - protocol: TCP
      port: 3306
Hence it has achieved the given target, but is practically unable to
access other pods.
If I understand you correctly, you achieved the goal defined in your network policy, and all your Pods carrying the label name: internal are currently able to communicate both with payroll (on port 8080) and mysql (on port 3306) Pods, right?
Please correct me if I'm wrong, but I see some contradiction in your statement. On the one hand you want your internal Pods to be able to communicate only with a very specific set of Pods, connecting to them on specified ports:
I want internal pod to only access:
a. Internal > Payroll on port 8080
b. Internal > mysql on port 3306
and on the other, you seem surprised that they can't access any other Pods:
Hence it has achieved the given target, but is practically unable to
access other pods.
Keep in mind that when you apply a NetworkPolicy rule to a specific set of Pods, a default deny-all rule is implicitly applied to the selected Pods at the same time (unless you decide to reconfigure the policy to make it work the way you want).
As you can read here:
Pods become isolated by having a NetworkPolicy that selects them. Once
there is any NetworkPolicy in a namespace selecting a particular pod,
that pod will reject any connections that are not allowed by any
NetworkPolicy. (Other pods in the namespace that are not selected by
any NetworkPolicy will continue to accept all traffic.)
The above applies also to egress rules.
If your internal Pods currently have access only to the payroll and mysql Pods on the specified ports, everything works as it is supposed to.
If you are interested in denying all other traffic to your payroll and mysql Pods, you should apply an ingress rule on those Pods (see the sketch below) rather than defining egress on the Pods which are supposed to communicate with them, but which at the same time should not be deprived of the ability to communicate with other Pods.
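A minimal sketch of such an ingress-side policy (it assumes the same name labels as in your setup; the policy name is made up). This one isolates the mysql Pods and allows connections only from internal Pods on port 3306:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: mysql-ingress-policy
  namespace: default
spec:
  podSelector:
    matchLabels:
      name: mysql
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          name: internal
    ports:
    - protocol: TCP
      port: 3306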
Please let me know if it helped. If something is not clear or my assumption was wrong, also please let me know and don't hesitate to ask additional questions.
I have a custom Kubernetes cluster on a server with a public IP and DNS pointing to it (also a wildcard record).
Gitlab was configured with the cluster following this guide: https://gitlab.touch4it.com/help/user/project/clusters/index#add-existing-kubernetes-cluster
However, after installing Ingress, the ingress endpoint is never detected:
I tried patching the object in k8s, like so:
externalIPs: (was empty)
- 1.2.3.4
externalTrafficPolicy: Local (was Cluster)
I suspect that the problem is the empty loadBalancer ingress status (scroll to the end of the output) of the object returned by:
# kubectl get service ingress-nginx-ingress-controller -n gitlab-managed-apps -o yaml
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: "2019-11-20T08:57:18Z"
  labels:
    app: nginx-ingress
    chart: nginx-ingress-1.22.1
    component: controller
    heritage: Tiller
    release: ingress
  name: ingress-nginx-ingress-controller
  namespace: gitlab-managed-apps
  resourceVersion: "3940"
  selfLink: /api/v1/namespaces/gitlab-managed-apps/services/ingress-nginx-ingress-controller
  uid: c175afcc-0b73-11ea-91ec-5254008dd01b
spec:
  clusterIP: 10.107.35.248
  externalIPs:
  - 1.2.3.4 # (public IP)
  externalTrafficPolicy: Local
  healthCheckNodePort: 30737
  ports:
  - name: http
    nodePort: 31972
    port: 80
    protocol: TCP
    targetPort: http
  - name: https
    nodePort: 31746
    port: 443
    protocol: TCP
    targetPort: https
  selector:
    app: nginx-ingress
    component: controller
    release: ingress
  sessionAffinity: None
  type: LoadBalancer
status:
  loadBalancer: {}
But GitLab still can't find the ingress endpoint. I tried restarting the cluster and GitLab.
The network inspection in GitLab always shows this response:
...
name ingress
status installed
status_reason null
version 1.22.1
external_ip null
external_hostname null
update_available false
can_uninstall false
...
Any ideas how to have a working Ingress Endpoint?
GitLab: 12.4.3 (4d477238500) k8s: 1.16.3-00
I had the exact same issue as you, and I finally figured out how to solve it.
The first thing to understand is that on bare metal you can't make it work without MetalLB, because MetalLB calls the required Kubernetes APIs so that the Service of type LoadBalancer accepts the IP address you give it.
So the first step is to deploy MetalLB to your cluster.
Then you need another machine running something like NGINX or HAProxy, or whatever else can do some load balancing.
Last but not least, you have to give the load balancer machine's IP address to MetalLB so that it can assign it to the Service.
Usually MetalLB requires a range of IP addresses, but you can also give one IP address like I did:
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: staging-public-ips
      protocol: layer2
      addresses:
      - 1.2.3.4/32
This way, MetalLB will assign the IP address to the Service of type LoadBalancer and GitLab will finally find the IP address.
WARNING: MetalLB will assign a given IP address only once. If you need many Services of type LoadBalancer, you will need as many machines running NGINX/HAProxy (and so on), and you will have to add their IP addresses to the MetalLB address pool.
For your information, I've posted all the technical details to my Gitlab issue here.
I have an application that can receive commands from a specific port like so:
echo <command> | nc <hostname> <port>
In this case it is opening port 22082, I believe inside its Docker container.
When I place this application into a kubernetes pod, I need to expose it by creating a kubernetes service. Here is my service:
apiVersion: v1
kind: Service
metadata:
  name: commander
spec:
  selector:
    app: commander
  ports:
  - protocol: TCP
    port: 22282
    targetPort: 22082
  #type: NodePort
  externalIPs:
  - 10.10.30.19
NOTE: I commented out NodePort because I haven't been able to expose the port using that method. Whenever I run sudo netstat -nlp | grep 22282, I get nothing.
Using an external IP I'm able to find the port and connect to it using netcat, but whenever I issue a command over the port, it just hangs.
Normally I should be able to issue a 'help' command and get information on the app. With Kubernetes I can't get that same output.
Now, if I use hostNetwork: true in my app's YAML (not the service), I can connect to the port and get my 'help' info.
What could be keeping my command from reaching the app while not using hostNetwork configuration?
Thanks
UPDATE: Noticed this message from sudo iptables --list:
Chain KUBE-SERVICES (1 references)
target prot opt source destination
REJECT tcp -- anywhere 172.21.155.23 /* default/commander: has no endpoints */ tcp dpt:22282 reject-with icmp-port-unreachable
UPDATE #2: I solved the above error by setting spec.template.metadata.labels.app to commander. I still, however, am experiencing an inability to send any command to the app.
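For reference, the label/selector alignment that fixed the "has no endpoints" error looks roughly like this (a sketch of the relevant Deployment fragment only; everything else stays as before):
# Deployment fragment (sketch): the Pod template label must match the Service selector
spec:
  selector:
    matchLabels:
      app: commander
  template:
    metadata:
      labels:
        app: commander   # matches the Service selector "app: commander"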
Thanks to #sfgroups I discovered that I needed to set an actual nodePort like so:
apiVersion: v1
kind: Service
metadata:
  name: commander
spec:
  selector:
    app: commander
  ports:
  - protocol: TCP
    port: 22282
    nodePort: 32282
    targetPort: 22082
  type: NodePort
Pretty odd behavior, makes me wonder what the point of the port field even is!
EDIT: The whole point of my setup is to achieve (if possible) the following:
I have multiple k8s nodes
When I contact an IP address (from my company's network), it should be routed to one of my containers/pods/services/whatever.
I should be able to easily set up that IP (like in my service .yml definition)
I'm running a small Kubernetes cluster (built with kubeadm) in order to evaluate whether I can move my Docker (old)Swarm setup to k8s. The feature I absolutely need is the ability to assign IPs to containers, like I do with MacVlan.
In my current Docker setup, I'm using MacVlan to assign IP addresses from my company's network to some containers so I can reach them directly (without a reverse proxy), as if they were physical servers. I'm trying to achieve something similar with k8s.
I found out that:
I have to use Service
I can't use the LoadBalancer type, as it's only for compatible cloud providers (like GCE or AWS).
I should use ExternalIPs
Ingress Resources are some kind of reverse proxy?
My YAML file is:
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80
      nodeSelector:
        kubernetes.io/hostname: k8s-slave-3
---
kind: Service
apiVersion: v1
metadata:
  name: nginx-service
spec:
  type: ClusterIP
  selector:
    app: nginx
  ports:
  - name: http
    protocol: TCP
    port: 80
    targetPort: 80
  externalIPs:
  - A.B.C.D
I was hoping that my service would get the IP A.B.C.D (which is one of my company's network addresses). My deployment is working, as I can reach my nginx container from inside the k8s cluster using its ClusterIP.
What am I missing? Or at least, where can I find information on my network traffic in order to see whether packets are arriving?
EDIT:
$ kubectl get svc
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes 10.96.0.1 <none> 443/TCP 6d
nginx-service 10.102.64.83 A.B.C.D 80/TCP 23h
Thanks.
First of all run this command:
kubectl get -n namespace services
Above command will return output like this:
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
backend NodePort 10.100.44.154 <none> 9400:3003/TCP 13h
frontend NodePort 10.107.53.39 <none> 3000:30017/TCP 13h
It is clear from the above output that external IPs are not assigned to the services yet. To assign an external IP to the backend service, run the following command:
kubectl patch svc backend -p '{"spec":{"externalIPs":["192.168.0.194"]}}'
and to assign an external IP to the frontend service, run this command:
kubectl patch svc frontend -p '{"spec":{"externalIPs":["192.168.0.194"]}}'
Now list the services in the namespace again to check whether the external IPs have been assigned:
kubectl get -n namespace services
We get output like this:
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
backend NodePort 10.100.44.154 192.168.0.194 9400:3003/TCP 13h
frontend NodePort 10.107.53.39 192.168.0.194 3000:30017/TCP 13h
Cheers! Kubernetes external IPs are now assigned.
If this is just for testing, then try
kubectl port-forward service/nginx-service 80:80
Then you can
curl http://localhost:80
A solution that could work (and not only for testing, though it has its shortcomings) is to set your Pod to use the host network, with the hostNetwork spec field set to true.
It means that you won't need a Service to expose your Pod, as it will always be accessible on your host via a single port (the containerPort you specified in the manifest). No need to keep a DNS mapping record in that case.
This also means that you can only run a single instance of this Pod on a given node (talking about shortcomings...). As such, it makes it a good candidate for a DaemonSet object.
If your Pod still needs to access/resolve internal Kubernetes hostnames, you need to set the dnsPolicy spec field to ClusterFirstWithHostNet. This setting will enable your Pod to access the K8s DNS service.
Example:
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: nginx
spec:
  selector: # required by apps/v1; must match the Pod template labels
    matchLabels:
      app: nginx-reverse-proxy
  template:
    metadata:
      labels:
        app: nginx-reverse-proxy
    spec:
      hostNetwork: true
      dnsPolicy: ClusterFirstWithHostNet
      tolerations: # allow a Pod instance to run on Master - optional
      - key: node-role.kubernetes.io/master
        effect: NoSchedule
      containers:
      - image: nginx
        name: nginx
        ports:
        - name: http
          containerPort: 80
        - name: https
          containerPort: 443
EDIT: I was put on this track thanks to the ingress-nginx documentation
You can just patch an external IP:
CMD: $ kubectl patch svc svc_name -p '{"spec":{"externalIPs":["your_external_ip"]}}'
E.g.: $ kubectl patch svc kubernetes -p '{"spec":{"externalIPs":["10.2.8.19"]}}'
You can try the kube-keepalived-vip configuration to route the traffic: https://github.com/kubernetes/contrib/tree/master/keepalived-vip
You can try adding "type: NodePort" in your YAML file for the service; then you'll have a port you can use to access it via the web browser or from the outside. In my case, it helped.
I don't know if this helps in your particular case, but what I did (on a bare-metal cluster) was to use the LoadBalancer type and set the loadBalancerIP as well as the externalIPs to my server IP, as you did.
After that, the correct external IP showed up for the load balancer.
Always use the namespace flag either before or after the service name, because namespace-based scoping applies to deployments and services, and it points kubectl at the service tagged with a specific namespace:
kubectl patch svc service-name -n namespace -p '{"spec":{"externalIPs":["IP"]}}'
Just include the additional option:
kubectl expose deployment hello-world --type=LoadBalancer --name=my-service --external-ip=1.1.1.1