Artifactory docker push error: unknown: Method Not Allowed

We are using an Artifactory Pro license and installed Artifactory through Helm on Kubernetes.
When we create a Docker local repository (using the Repository Path Method) and push a Docker image,
we get a 405 Method Not Allowed error. (docker login and docker pull work normally.)
########## error msg
# docker push art2.bee0dev.lge.com/docker-local/hello-world
e07ee1baac5f: Pushing [==================================================>] 14.85kB
unknown: Method Not Allowed
##########
We are using an HAProxy load balancer for TLS termination in front of the NGINX Ingress Controller
(the NGINX Ingress Controller's HTTP NodePort is 31071).
Please help us figure out how we can solve this problem.
The Artifactory and HAProxy settings are as follows.
########## values.yaml
global:
  joinKeySecretName: "artbee-stg-joinkey-secret"
  masterKeySecretName: "artbee-stg-masterkey-secret"
  storageClass: "sa-stg-netapp8300-bee-blk-nonretain"
ingress:
  enabled: true
  defaultBackend:
    enabled: false
  hosts: ["art2.bee0dev.lge.com"]
  routerPath: /
  artifactoryPath: /artifactory/
  className: ""
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/proxy-body-size: "0"
    nginx.ingress.kubernetes.io/proxy-read-timeout: "600"
    nginx.ingress.kubernetes.io/proxy-send-timeout: "600"
    nginx.ingress.kubernetes.io/configuration-snippet: |
      proxy_pass_header Server;
      proxy_set_header X-JFrog-Override-Base-Url https://art2.bee0dev.lge.com;
  labels: {}
  tls: []
  additionalRules: []
## Artifactory license.
artifactory:
  name: artifactory
  replicaCount: 1
  image:
    registry: releases-docker.jfrog.io
    repository: jfrog/artifactory-pro
    # tag:
    pullPolicy: IfNotPresent
  labels: {}
  updateStrategy:
    type: RollingUpdate
  migration:
    enabled: false
  customInitContainersBegin: |
    - name: "init-mount-permission-setup"
      image: "{{ .Values.initContainerImage }}"
      imagePullPolicy: "{{ .Values.artifactory.image.pullPolicy }}"
      securityContext:
        runAsUser: 0
        runAsGroup: 0
        allowPrivilegeEscalation: false
        capabilities:
          drop:
            - NET_RAW
      command:
        - 'bash'
        - '-c'
        - if [ $(ls -la /var/opt/jfrog | grep artifactory | awk -F' ' '{print $3$4}') == 'rootroot' ]; then
            echo "mount permission=> root:root";
            echo "change mount permission to 1030:1030 " {{ .Values.artifactory.persistence.mountPath }};
            chown -R 1030:1030 {{ .Values.artifactory.persistence.mountPath }};
          else
            echo "already set. No change required.";
            ls -la {{ .Values.artifactory.persistence.mountPath }};
          fi
      volumeMounts:
        - mountPath: "{{ .Values.artifactory.persistence.mountPath }}"
          name: artifactory-volume
  database:
    maxOpenConnections: 80
  tomcat:
    maintenanceConnector:
      port: 8091
    connector:
      maxThreads: 200
      sendReasonPhrase: false
      extraConfig: 'acceptCount="100"'
  customPersistentVolumeClaim: {}
  license:
    ## licenseKey is the license key in plain text. Use either this or the license.secret setting
    licenseKey: "???"
    secret:
    dataKey:
  resources:
    requests:
      memory: "2Gi"
      cpu: "1"
    limits:
      memory: "20Gi"
      cpu: "8"
  javaOpts:
    xms: "1g"
    xmx: "12g"
  admin:
    ip: "127.0.0.1"
    username: "admin"
    password: "!swiit123"
    secret:
    dataKey:
  service:
    name: artifactory
    type: ClusterIP
    loadBalancerSourceRanges: []
    annotations: {}
  persistence:
    mountPath: "/var/opt/jfrog/artifactory"
    enabled: true
    accessMode: ReadWriteOnce
    size: 100Gi
    type: file-system
    storageClassName: "sa-stg-netapp8300-bee-blk-nonretain"
nginx:
  enabled: false
##########
########## haproxy config
frontend cto-stage-http-frontend
    bind 10.185.60.75:80
    bind 10.185.60.76:80
    bind 10.185.60.201:80
    bind 10.185.60.75:443 ssl crt /etc/haproxy/ssl/bee0dev.lge.com.pem ssl-min-ver TLSv1.2 ciphers ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256
    bind 10.185.60.76:443 ssl crt /etc/haproxy/ssl/bee0dev.lge.com.pem ssl-min-ver TLSv1.2 ciphers ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256
    bind 10.185.60.201:443 ssl crt /etc/haproxy/ssl/bee0dev.lge.com.pem ssl-min-ver TLSv1.2 ciphers ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256
    mode http
    option forwardfor
    option accept-invalid-http-request
    acl k8s-cto-stage hdr_end(host) -i -f /etc/haproxy/web-ide/cto-stage
    use_backend k8s-cto-stage-http if k8s-cto-stage

backend k8s-cto-stage-http
    mode http
    redirect scheme https if !{ ssl_fc }
    option tcp-check
    balance roundrobin
    server lgestgbee04v 10.185.60.78:31071 check fall 3 rise 2
##########
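For reference, docker push starts by calling the Docker registry v2 blob-upload endpoint, so replaying that first request directly can show which hop is answering with the 405. This is only a sketch using the hostname and repository key from the question; depending on authentication and routing it may return 401 or another code instead:

# initiate a blob upload the same way the docker client does (prompts for the password)
curl -sk -u admin -o /dev/null -w '%{http_code}\n' -X POST \
  https://art2.bee0dev.lge.com/v2/docker-local/hello-world/blobs/uploads/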

The request doesn't seem to be landing at the correct endpoint. Please remove the semicolon from the docker push command and retry. Instead of:
docker push art2.bee0dev.lge.com;/docker-local/hello-world
try executing it like below:
docker push art2.bee0dev.lge.com/docker-local/hello-world

Related

Nginx-Ingress not picking up certificate from cert-manager

I'm currently trying to set up an application in K8S behind an nginx-ingress. The certs should be generated by cert-manager and Let's Encrypt (Staging for now).
The application is in namespace prod, the nginx-ingress-controller in namespace nginx, and cert-manager lives in the cert-manager namespace.
We set up a ClusterIssuer for Let's Encrypt staging and successfully generated a certificate (we can see it in the secrets and certificate resources). However, nginx-ingress-controller is still answering with the Kubernetes Ingress Controller Fake Certificate.
Here are some technical details:
Ingress
❯ kubectl describe ingress/forgerock
Name: forgerock
Labels: <none>
Namespace: prod
Address: someaws-id.elb.eu-central-1.amazonaws.com
Ingress Class: <none>
Default backend: <default>
TLS:
sslcertciam terminates ciam.test.fancycorp.com
Rules:
Host Path Backends
---- ---- --------
ciam.test.fancycorp.com
/am/json/authenticate am:80 (10.0.2.210:8081)
...
/am/extlogin am:80 (10.0.2.210:8081)
Annotations: cert-manager.io/cluster-issuer: letsencrypt-stage
haproxy.router.openshift.io/cookie_name: route
kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/affinity: cookie
nginx.ingress.kubernetes.io/body-size: 64m
nginx.ingress.kubernetes.io/enable-cors: false
nginx.ingress.kubernetes.io/proxy-body-size: 64m
nginx.ingress.kubernetes.io/proxy-buffer-size: 16k
nginx.ingress.kubernetes.io/proxy-read-timeout: 600
nginx.ingress.kubernetes.io/proxy-send-timeout: 600
nginx.ingress.kubernetes.io/send-timeout: 600
nginx.ingress.kubernetes.io/session-cookie-hash: sha1
nginx.ingress.kubernetes.io/session-cookie-name: route
nginx.ingress.kubernetes.io/ssl-redirect: true
Events: <none>
Issuer:
❯ kubectl describe clusterissuer/letsencrypt-stage
Name: letsencrypt-stage
Namespace:
Labels: <none>
Annotations: <none>
API Version: cert-manager.io/v1
Kind: ClusterIssuer
Metadata:
Creation Timestamp: 2022-09-12T07:26:05Z
Generation: 1
Managed Fields:
API Version: cert-manager.io/v1
Fields Type: FieldsV1
fieldsV1:
f:metadata:
f:annotations:
.:
f:kubectl.kubernetes.io/last-applied-configuration:
f:spec:
.:
f:acme:
.:
f:email:
f:privateKeySecretRef:
.:
f:name:
f:server:
f:solvers:
Manager: kubectl-client-side-apply
Operation: Update
Time: 2022-09-12T07:26:05Z
API Version: cert-manager.io/v1
Fields Type: FieldsV1
fieldsV1:
f:status:
.:
f:acme:
.:
f:lastRegisteredEmail:
f:uri:
f:conditions:
Manager: controller
Operation: Update
Subresource: status
Time: 2022-09-12T07:26:06Z
Resource Version: 17749318
UID: fcbcbfff-b875-4ac4-805b-65ab0b4e1a93
Spec:
Acme:
Email: admin@fancycorp.com
Preferred Chain:
Private Key Secret Ref:
Name: letsencrypt-stage
Server: https://acme-staging-v02.api.letsencrypt.org/directory
Solvers:
http01:
Ingress:
Class: nginx
Status:
Acme:
Last Registered Email: admin@fancycorp.com
Uri: https://acme-staging-v02.api.letsencrypt.org/acme/acct/68184363
Conditions:
Last Transition Time: 2022-09-12T07:26:06Z
Message: The ACME account was registered with the ACME server
Observed Generation: 1
Reason: ACMEAccountRegistered
Status: True
Type: Ready
Events: <none>
Certificate:
❯ kubectl describe cert/sslcertciam
Name: sslcertciam
Namespace: prod
Labels: <none>
Annotations: <none>
API Version: cert-manager.io/v1
Kind: Certificate
Metadata:
Creation Timestamp: 2022-09-12T07:40:04Z
Generation: 1
Managed Fields:
API Version: cert-manager.io/v1
Fields Type: FieldsV1
fieldsV1:
f:metadata:
f:ownerReferences:
.:
k:{"uid":"2a0af8f2-8166-4a8e-bb50-fd0aa906f844"}:
f:spec:
.:
f:dnsNames:
f:issuerRef:
.:
f:group:
f:kind:
f:name:
f:secretName:
f:usages:
Manager: controller
Operation: Update
Time: 2022-09-12T07:40:04Z
API Version: cert-manager.io/v1
Fields Type: FieldsV1
fieldsV1:
f:status:
.:
f:conditions:
f:notAfter:
f:notBefore:
f:renewalTime:
f:revision:
Manager: controller
Operation: Update
Subresource: status
Time: 2022-09-12T07:40:07Z
Owner References:
API Version: networking.k8s.io/v1
Block Owner Deletion: true
Controller: true
Kind: Ingress
Name: forgerock
UID: 2a0af8f2-8166-4a8e-bb50-fd0aa906f844
Resource Version: 17753197
UID: 2484d1fe-5b80-4cbc-b2f8-7f4276e15a37
Spec:
Dns Names:
ciam.test.fancycorp.com
Issuer Ref:
Group: cert-manager.io
Kind: ClusterIssuer
Name: letsencrypt-stage
Secret Name: sslcertciam
Usages:
digital signature
key encipherment
Status:
Conditions:
Last Transition Time: 2022-09-12T07:40:07Z
Message: Certificate is up to date and has not expired
Observed Generation: 1
Reason: Ready
Status: True
Type: Ready
Not After: 2022-12-11T06:40:05Z
Not Before: 2022-09-12T06:40:06Z
Renewal Time: 2022-11-11T06:40:05Z
Revision: 1
Events: <none>
Secret:
❯ kubectl describe secret/sslcertciam
Name: sslcertciam
Namespace: prod
Labels: <none>
Annotations: cert-manager.io/alt-names: ciam.test.fancycorp.com
cert-manager.io/certificate-name: sslcertciam
cert-manager.io/common-name: ciam.test.fancycorp.com
cert-manager.io/ip-sans:
cert-manager.io/issuer-group: cert-manager.io
cert-manager.io/issuer-kind: ClusterIssuer
cert-manager.io/issuer-name: letsencrypt-stage
cert-manager.io/uri-sans:
Type: kubernetes.io/tls
Data
====
tls.crt: 5741 bytes
tls.key: 1675 bytes
Certificate Request:
❯ kubectl describe certificaterequests/sslcertciam-p6qpg
Name: sslcertciam-p6qpg
Namespace: prod
Labels: <none>
Annotations: cert-manager.io/certificate-name: sslcertciam
cert-manager.io/certificate-revision: 1
cert-manager.io/private-key-secret-name: sslcertciam-ztc8q
API Version: cert-manager.io/v1
Kind: CertificateRequest
Metadata:
Creation Timestamp: 2022-09-12T07:40:05Z
Generate Name: sslcertciam-
Generation: 1
Managed Fields:
API Version: cert-manager.io/v1
Fields Type: FieldsV1
fieldsV1:
f:metadata:
f:annotations:
.:
f:cert-manager.io/certificate-name:
f:cert-manager.io/certificate-revision:
f:cert-manager.io/private-key-secret-name:
f:generateName:
f:ownerReferences:
.:
k:{"uid":"2484d1fe-5b80-4cbc-b2f8-7f4276e15a37"}:
f:spec:
.:
f:issuerRef:
.:
f:group:
f:kind:
f:name:
f:request:
f:usages:
Manager: controller
Operation: Update
Time: 2022-09-12T07:40:05Z
API Version: cert-manager.io/v1
Fields Type: FieldsV1
fieldsV1:
f:status:
.:
f:certificate:
f:conditions:
Manager: controller
Operation: Update
Subresource: status
Time: 2022-09-12T07:40:06Z
Owner References:
API Version: cert-manager.io/v1
Block Owner Deletion: true
Controller: true
Kind: Certificate
Name: sslcertciam
UID: 2484d1fe-5b80-4cbc-b2f8-7f4276e15a37
Resource Version: 17753174
UID: 2289de7b-f43f-4859-816b-b4a9794846ec
Spec:
Extra:
authentication.kubernetes.io/pod-name:
cert-manager-75947cd847-7gndz
authentication.kubernetes.io/pod-uid:
91415540-9113-4456-86d2-a0e28478718a
Groups:
system:serviceaccounts
system:serviceaccounts:cert-manager
system:authenticated
Issuer Ref:
Group: cert-manager.io
Kind: ClusterIssuer
Name: letsencrypt-stage
Request: xxx
UID: 5be755b9-711c-49ac-a962-6b3a3f80d16e
Usages:
digital signature
key encipherment
Username: system:serviceaccount:cert-manager:cert-manager
Status:
Certificate: <base64-encoded-cert>
Conditions:
Last Transition Time: 2022-09-12T07:40:05Z
Message: Certificate request has been approved by cert-manager.io
Reason: cert-manager.io
Status: True
Type: Approved
Last Transition Time: 2022-09-12T07:40:06Z
Message: Certificate fetched from issuer successfully
Reason: Issued
Status: True
Type: Ready
Events: <none>
Curl:
❯ curl -v https://ciam.test.fancycorp.com/am/extlogin/ -k
* Trying xxx.xxx.xxx.xxx:443...
* Connected to ciam.test.fancycorp.com (xxx.xxx.xxx.xxx) port 443 (#0)
* ALPN, offering h2
* ALPN, offering http/1.1
* successfully set certificate verify locations:
* CAfile: /etc/ssl/cert.pem
* CApath: none
* (304) (OUT), TLS handshake, Client hello (1):
* (304) (IN), TLS handshake, Server hello (2):
* (304) (IN), TLS handshake, Unknown (8):
* (304) (IN), TLS handshake, Certificate (11):
* (304) (IN), TLS handshake, CERT verify (15):
* (304) (IN), TLS handshake, Finished (20):
* (304) (OUT), TLS handshake, Finished (20):
* SSL connection using TLSv1.3 / AEAD-AES256-GCM-SHA384
* ALPN, server accepted to use h2
* Server certificate:
* subject: O=Acme Co; CN=Kubernetes Ingress Controller Fake Certificate
* start date: Sep 12 07:43:15 2022 GMT
* expire date: Sep 12 07:43:15 2023 GMT
* issuer: O=Acme Co; CN=Kubernetes Ingress Controller Fake Certificate
* SSL certificate verify result: unable to get local issuer certificate (20), continuing anyway.
* Using HTTP2, server supports multiplexing
* Connection state changed (HTTP/2 confirmed)
* Copying HTTP/2 data in stream buffer to connection buffer after upgrade: len=0
* Using Stream ID: 1 (easy handle 0x126811e00)
> GET /am/extlogin/ HTTP/2
> Host: ciam.test.fancycorp.com
> user-agent: curl/7.79.1
> accept: */*
...
Update 1:
When running kubectl ingress-nginx certs --host ciam.test.fancycorp.com, I am also getting the Fake Certificate returned.
Found the issue and solution...
There was another Ingress defined in another namespace that declared the same hostname but failed to link to a proper secret with the TLS cert. When I deleted that one, it immediately worked.
Lessons learned: Be aware of impacts from other namespaces!
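One way to spot this kind of clash up front is to list every Ingress in the cluster that claims the host (shown here with the hostname from this question):

kubectl get ingress --all-namespaces | grep ciam.test.fancycorp.com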

SSL/TLS passthrough NGINX-Ingress-Controller on Openshift Not Working

I have deployed the NGINX Operator and NGINX Ingress Controller per the following GitHub instructions and the secrets from devopscube.
The current setup is:
AWS Classic LB -> ROSA Cluster [Helm NGINX-Ingress-Controller -> NGINX-Ingress -> Service -> Pod]
Here is the YAML file I used to create the NGINX-Ingress-Controller resource. You will see that enableTLSPassthrough is set to true. However, I am unsure this is taking effect. My goal here is end-to-end TLS encryption from the client to the NGINX service/pod. Right now I am met with error code 400 when accessing it in the browser through http (http worked perfectly fine in the hello-world setup).
"400 Bad Request The plain HTTP request was sent to HTTPS port"
kind: NginxIngress
apiVersion: charts.nginx.org/v1alpha1
metadata:
  name: nginxingress
  namespace: nginx-ingress
spec:
  controller:
    affinity: {}
    appprotect:
      enable: false
    appprotectdos:
      debug: false
      enable: false
      maxDaemons: 0
      maxWorkers: 0
      memory: 0
    config:
      annotations: {}
      entries: {}
    customPorts: []
    defaultTLS:
      secret: nginx-ingress/default-server-secret
    enableCertManager: false
    enableCustomResources: true
    enableExternalDNS: false
    enableLatencyMetrics: false
    enableOIDC: false
    enablePreviewPolicies: false
    enableSnippets: false
    enableTLSPassthrough: true
    extraContainers: []
    globalConfiguration:
      create: false
      spec: {}
    healthStatus: false
    healthStatusURI: /nginx-health
    hostNetwork: false
    image:
      pullPolicy: IfNotPresent
      repository: nginx/nginx-ingress
      tag: 2.3.0-ubi
    ingressClass: nginx
    initContainers: []
    kind: deployment
    logLevel: 1
    nginxDebug: false
    nginxReloadTimeout: 60000
    nginxStatus:
      allowCidrs: 127.0.0.1
      enable: true
      port: 8080
    nginxplus: false
    nodeSelector: {}
    pod:
      annotations: {}
      extraLabels: {}
    priorityClassName: null
    readyStatus:
      enable: true
      port: 8081
    replicaCount: 1
    reportIngressStatus:
      annotations: {}
      enable: true
      enableLeaderElection: true
      ingressLink: ''
    resources:
      requests:
        cpu: 100m
        memory: 128Mi
    service:
      annotations: {}
      create: true
      customPorts: []
      externalIPs: []
      externalTrafficPolicy: Local
      extraLabels: {}
      httpPort:
        enable: true
        nodePort: ''
        port: 80
        targetPort: 80
      httpsPort:
        enable: true
        nodePort: ''
        port: 443
        targetPort: 443
      loadBalancerIP: ''
      loadBalancerSourceRanges: []
      type: LoadBalancer
    serviceAccount:
      imagePullSecretName: ''
    setAsDefaultIngress: true
    terminationGracePeriodSeconds: 30
    tolerations: []
    volumeMounts: []
    volumes: []
    watchNamespace: ''
    wildcardTLS:
      secret: null
  nginxServiceMesh:
    enable: false
    enableEgress: false
  prometheus:
    create: true
    port: 9113
    scheme: http
    secret: ''
  rbac:
    create: true
Taking a look at the NGINX-Ingress-Controller pod logs on creation, I can see nothing about TLS being enabled. A flag does get set in the args section once the pod deploys, but I am still unsure this is working.
W0802 20:33:26.594545 1 flags.go:273] Ignoring unhandled arguments: []
I0802 20:33:26.594683 1 flags.go:190] Starting NGINX Ingress Controller Version=2.3.0 PlusFlag=false
I0802 20:33:26.594689 1 flags.go:191] Commit=979db22d8065b22fedb410c9b9c5875cf0a6dc66 Date=2022-07-12T08:51:24Z DirtyState=false Arch=linux/amd64 Go=go1.18.3
I0802 20:33:26.601340 1 main.go:210] Kubernetes version: 1.22.0
I0802 20:33:26.606551 1 main.go:326] Using nginx version: nginx/1.23.0
2022/08/02 20:33:26 [notice] 13#13: using the "epoll" event method
2022/08/02 20:33:26 [notice] 13#13: nginx/1.23.0
2022/08/02 20:33:26 [notice] 13#13: built by gcc 8.5.0 20210514 (Red Hat 8.5.0-4) (GCC)
2022/08/02 20:33:26 [notice] 13#13: OS: Linux 4.18.0-305.19.1.el8_4.x86_64
2022/08/02 20:33:26 [notice] 13#13: getrlimit(RLIMIT_NOFILE): 1048576:1048576
2022/08/02 20:33:26 [notice] 13#13: start worker processes
2022/08/02 20:33:26 [notice] 13#13: start worker process 15
2022/08/02 20:33:26 [notice] 13#13: start worker process 16
2022/08/02 20:33:26 [notice] 13#13: start worker process 17
2022/08/02 20:33:26 [notice] 13#13: start worker process 18
I0802 20:33:26.630298 1 listener.go:54] Starting Prometheus listener on: :9113/metrics
I0802 20:33:26.630860 1 leaderelection.go:248] attempting to acquire leader lease nginx-ingress/nginxingress-nginx-ingress-leader-election...
I0802 20:33:26.639466 1 leaderelection.go:258] successfully acquired lease nginx-ingress/nginxingress-nginx-ingress-leader-election
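One way to confirm whether the operator actually passed the passthrough flag through is to inspect the generated controller Deployment's arguments rather than the logs. This is only a sketch: the Deployment name below is an assumption based on the CR name and may differ in your cluster.

# list the controller container args and look for the TLS passthrough flag (Deployment name is a guess)
kubectl -n nginx-ingress get deployment nginxingress-nginx-ingress \
  -o jsonpath='{.spec.template.spec.containers[0].args}' | tr ',' '\n' | grep -i passthrough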
Here is the Ingress resource YAML:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx-ingress
  annotations:
    # kubernetes.io/ingress.class: addon-http-application-routing
    # nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
    # nginx.ingress.kubernetes.io/ssl-redirect: "true"
    # nginx.ingress.kubernetes.io/proxy-redirect-from: https
    # nginx.ingress.kubernetes.io/proxy-redirect-to: https
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
    # nginx.ingress.kubernetes.io/proxy-ssl-protocols: "HTTPS"
    # nginx.ingress.kubernetes.io/secure-backends: "true"
spec:
  defaultBackend:
    service:
      name: nginx
      port:
        number: 443
  ingressClassName: nginx
  tls:
    - hosts:
        - nginx-tlssni.apps.clustername.openshiftapps.com
      secretName: nginx-tls
  rules:
    - host: "nginx-tlssni.apps.clustername.openshiftapps.com"
      http:
        paths:
          - pathType: Prefix
            path: /
            backend:
              service:
                name: nginx
                port:
                  number: 443
Thank you for your insight :)
There are many kinds of NGINX-based ingress controllers. The two that are most easily confused are the NGINX Inc. ingress controller and the CNCF Kubernetes ingress controller (ingress-nginx).
My understanding is that nginx.ingress.kubernetes.io/backend-protocol: "HTTPS" is for the CNCF Kubernetes ingress controller.
Now to your question - based on this example, try changing your annotations to the following:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx-ingress
  annotations:
    nginx.org/ssl-services: "nginx" # Name of your k8s service with TLS
...
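Put together with the spec from the question, the whole manifest might look roughly like this. This is only a sketch for the NGINX Inc. controller; the host, secret, and service names are the ones from the question:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx-ingress
  annotations:
    nginx.org/ssl-services: "nginx"   # backend service that expects TLS
spec:
  ingressClassName: nginx
  tls:
    - hosts:
        - nginx-tlssni.apps.clustername.openshiftapps.com
      secretName: nginx-tls
  rules:
    - host: "nginx-tlssni.apps.clustername.openshiftapps.com"
      http:
        paths:
          - pathType: Prefix
            path: /
            backend:
              service:
                name: nginx
                port:
                  number: 443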

Extend helm upgrade cmd to add some field values

I'm installing ingress-nginx using a modified YAML file:
kubectl apply -f deploy.yaml
The yaml file is just the original deploy file but with added hostPorts for the deployment:
ports:
  - name: http
    containerPort: 80
    protocol: TCP
  - name: https
    containerPort: 443
    protocol: TCP
  - name: webhook
    containerPort: 8443
    protocol: TCP
becomes:
ports:
  - name: http
    containerPort: 80
    protocol: TCP
    hostPort: 80 #<-- added
  - name: https
    containerPort: 443
    protocol: TCP
    hostPort: 443 #<-- added
  - name: webhook
    containerPort: 8443
    protocol: TCP
    hostPort: 8443 #<-- added
So this is working for me. But I would like to install ingress nginx using helm:
helm upgrade --install ingress-nginx ingress-nginx \
--repo https://kubernetes.github.io/ingress-nginx \
--namespace ingress-nginx --create-namespace
Is it possible to add the hostPort values using helm (-f values.yml)? I need to add hostPort in Deployment.spec.template.containers.ports, but I have two problems writing the correct values.yml file:
values.yml
# How to access the deployment?
spec:
  template:
    containers:
      ports: # How to add field with existing containerPort value of each element in the array?
Two ways to find out:
You can take a closer look at the helm chart itself: https://github.com/kubernetes/ingress-nginx/blob/main/charts/ingress-nginx
Here you can find the deployment spec: https://github.com/kubernetes/ingress-nginx/blob/main/charts/ingress-nginx/templates/controller-deployment.yaml
And under it, you can see there's a condition that enables hostPort: https://github.com/kubernetes/ingress-nginx/blob/main/charts/ingress-nginx/templates/controller-deployment.yaml#L113
(Proper one) Always dig through values.yaml https://github.com/kubernetes/ingress-nginx/blob/main/charts/ingress-nginx/values.yaml#L90
and chart documentation https://github.com/kubernetes/ingress-nginx/blob/main/charts/ingress-nginx/README.md#:~:text=controller.hostPort.enabled
First of all, you already have hostPort in values.yaml. See the following fragment:
## Use host ports 80 and 443
## Disabled by default
hostPort:
  # -- Enable 'hostPort' or not
  enabled: false
  ports:
    # -- 'hostPort' http port
    http: 80
    # -- 'hostPort' https port
    https: 443
You should turn it on in values.yaml:
## Use host ports 80 and 443
## Disabled by default
hostPort:
  # -- Enable 'hostPort' or not
  enabled: true
After that - as you know - you can install your ingress via helm:
helm install -f values.yaml
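Alternatively, the same switch can be flipped on the command line without editing values.yaml. This is a sketch assuming the chart exposes these keys as controller.hostPort.* (as the linked values.yaml suggests):

helm upgrade --install ingress-nginx ingress-nginx \
  --repo https://kubernetes.github.io/ingress-nginx \
  --namespace ingress-nginx --create-namespace \
  --set controller.hostPort.enabled=true \
  --set controller.hostPort.ports.http=80 \
  --set controller.hostPort.ports.https=443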
About webhook - see here.
You can also find hostPort in the deployment template. Here is one of the templates (controller-deployment.yaml):
In this file there are three appearances of hostPort (the value controller.hostPort.enabled is responsible for enabling or disabling the hostPort).
Here:
{{- if $.Values.controller.hostPort.enabled }}
  hostPort: {{ index $.Values.controller.hostPort.ports $key | default $value }}
{{- end }}
and here:
{{- range $key, $value := .Values.tcp }}
- name: {{ $key }}-tcp
  containerPort: {{ $key }}
  protocol: TCP
  {{- if $.Values.controller.hostPort.enabled }}
  hostPort: {{ $key }}
  {{- end }}
{{- end }}
{{- range $key, $value := .Values.udp }}
- name: {{ $key }}-udp
  containerPort: {{ $key }}
  protocol: UDP
  {{- if $.Values.controller.hostPort.enabled }}
  hostPort: {{ $key }}
  {{- end }}
{{- end }}
See also:
ingress-nginx Documentation on Github
nginx Documentation
NGINX - Helm Charts
Helm Documentation

Service.yaml throws null pointer error when running helm upgrade install

I am trying a helm install for a sample application consisting of two microservices. I have created a solution-level folder called charts and all subsequent Helm-specific resources (as per this example (LINK)).
When I execute helm upgrade --install microsvc-poc--release . from C:\Users\username\source\repos\MicroservicePOC\charts\microservice-poc (where values.yml is) I get this error:
Error: template: microservicepoc/templates/service.yaml:8:18: executing "microservicepoc/templates/service.yaml" at <.Values.service.type>: nil pointer evaluating interface {}.type
I am not quite sure what exactly causes this behavior; I have set all possible defaults in values.yaml as below:
payments-app-service:
  replicaCount: 3
  image:
    repository: golide/paymentsapi
    pullPolicy: IfNotPresent
    tag: "0.1.0"
  service:
    type: ClusterIP
    port: 80
  ingress:
    enabled: true
    annotations:
      nginx.ingress.kubernetes.io/rewrite-target: "/"
    hosts:
      - host: payments-svc.local
        paths:
          - "/payments-app"
  autoscaling:
    enabled: false
  serviceAccount:
    create: false
products-app-service:
  replicaCount: 3
  image:
    repository: productsapi_productsapi
    pullPolicy: IfNotPresent
    tag: "latest"
  service:
    type: ClusterIP
    port: 80
  ingress:
    enabled: true
    annotations:
      nginx.ingress.kubernetes.io/rewrite-target: "/"
    hosts:
      - host: products-svc.local
        paths:
          - "/products-app"
  autoscaling:
    enabled: false
  serviceAccount:
    create: false
As a check I opened the service.yaml file, and it shows syntax errors which I think may be related to why helm install is failing:
Missed comma between flow control entries
This error is reported on lines 6 and 15 of the service.yaml file below:
apiVersion: v1
kind: Service
metadata:
  name: {{ include "microservicepoc.fullname" . }}
  labels:
    {{- include "microservicepoc.labels" . | nindent 4 }}
spec:
  type: {{ .Values.service.type }}
  ports:
    - port: {{ .Values.service.port }}
      targetPort: http
      protocol: TCP
      name: http
  selector:
    {{- include "microservicepoc.selectorLabels" . | nindent 4 }}
What am I missing?
I have tried recreating the chart afresh, but when I try helm install I get the exact same error. Moreover, service.yaml keeps showing the same syntax error (I have not edited anything in service.yaml that would otherwise cause linting issues).
As the error describes, helm can't find the service field in the values.yaml file when rendering the template, which causes the rendering to fail.
The services in your values.yaml file are located under the payments-app-service field and the products-app-service field. To access them, you need to reference {{ .Values.payments-app-service.service.type }} or {{ .Values.products-app-service.service.type }},
like:
apiVersion: v1
kind: Service
metadata:
  name: {{ include "microservicepoc.fullname" . }}
  labels:
    {{- include "microservicepoc.labels" . | nindent 4 }}
spec:
  type: {{ .Values.products-app-service.service.type }}
  ports:
    - port: {{ .Values.products-app-service.service.port }}
      targetPort: http
      protocol: TCP
      name: http
  selector:
    {{- include "microservicepoc.selectorLabels" . | nindent 4 }}
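One caveat worth checking: Go templates cannot dereference map keys that contain hyphens with dot syntax, so a reference like .Values.products-app-service.service.type may itself fail to parse. The usual workaround is the index function, roughly:

spec:
  type: {{ index .Values "products-app-service" "service" "type" }}
  ports:
    - port: {{ index .Values "products-app-service" "service" "port" }}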
It is recommended that you get more familiar with Helm by reading the official documentation:
helm doc

How do I mount file as ConfigMap inside DaemonSet?

I have the following nginx config file (named nginx-daemonset.conf) that I want to use inside my DaemonSet:
events {
    worker_connections 1024;
}
http {
    server {
        listen 80;
        location / {
            proxy_pass http://my-nginx;
        }
    }
}
I created a ConfigMap using the following command: kubectl create configmap nginx2.conf --from-file=nginx-daemonset.conf
I have the following DaemonSet (nginx-daemonset-deployment.yml) inside which I am trying to mount this ConfigMap, so that the previous nginx config file is used:
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: nginx-daemonset
  namespace: kube-system
  labels:
    k8s-app: nginx-daemonset
spec:
  selector:
    matchLabels:
      name: nginx-daemonset
  template:
    metadata:
      labels:
        name: nginx-daemonset
    spec:
      tolerations:
        # this toleration is to have the daemonset runnable on master nodes
        # remove it if your masters can't run pods
        - key: node-role.kubernetes.io/master
          effect: NoSchedule
      containers:
        - name: nginx
          image: nginx
          resources:
            limits:
              memory: 200Mi
            requests:
              cpu: 100m
              memory: 200Mi
          volumeMounts:
            - name: nginx2-conf
              mountPath: /etc/nginx/nginx.conf
              subPath: nginx.conf
      volumes:
        - name: nginx2-conf
          configMap:
            name: nginx2.conf
I deployed this DaemonSet using kubectl apply -f nginx-daemonset-deployment.yml, but my newly created Pod is crashing with the following error:
Error: failed to start container "nginx": Error response from daemon: OCI runtime create failed: container_linux.go:370: starting container process caused: process_linux.go:459: container init caused: rootfs_linux.go:59: mounting "/var/lib/kubelet/pods/cd9f6f7b-31db-4ab3-bbc0-189e1d392979/volume-subpaths/nginx2-conf/nginx/0" to rootfs at "/var/lib/docker/overlay2/b21ccba23347a445fa40eca943a543c1103d9faeaaa0218f97f8e33bacdd4bb3/merged/etc/nginx/nginx.conf" caused: not a directory: unknown: Are you trying to mount a directory onto a file (or vice-versa)? Check if the specified host path exists and is the expected type
I did another Deployment with a different nginx config before and everything worked fine, so the problem is probably somehow related to the DaemonSet.
Please, how do I get past this error and mount the config file properly?
First, create your config file as a ConfigMap, e.g. nginx-conf:
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-conf
data:
  envfile: |
    events {
      worker_connections 1024;
    }
    http {
      server {
        listen 80;
        location / {
          proxy_pass http://my-nginx;
        }
      }
    }
Then create your DaemonSet, define the volume from that ConfigMap, and finally mount it via volumeMounts with subPath:
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: nginx
spec:
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx
          ports:
            - name: http
              containerPort: 80
          volumeMounts:
            - mountPath: /etc/nginx/nginx.conf
              subPath: nginx.conf
              readOnly: true
              name: nginx-vol
      volumes:
        - name: nginx-vol
          configMap:
            name: nginx-conf
            items:
              - key: envfile
                path: nginx.conf
Note that to mount a single file instead of a directory, you must use path in the configMap items and subPath in volumeMounts.
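If you prefer to keep creating the ConfigMap imperatively as in the question, you can set the key explicitly so it matches the subPath, and then verify the mount. This is a sketch reusing the names from the question; the exec target assumes the question's DaemonSet is running in kube-system:

# key becomes nginx.conf, so the existing subPath: nginx.conf mount matches it
kubectl create configmap nginx2.conf --from-file=nginx.conf=nginx-daemonset.conf
# confirm a single file (not a directory) ended up at the mount path
kubectl -n kube-system exec ds/nginx-daemonset -c nginx -- cat /etc/nginx/nginx.conf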
