How do I resolve the following error in Spinnaker? Spinnaker is deployed on Kubernetes, and 8084 and 9000 are the ports I configured. The error appears when running hal deploy connect (spinnaker-halyard):

Unable to listen on port 8084: All listeners failed to create with the following errors: Unable to create listener: Error listen tcp4 127.0.0.1:8084: bind: address already in use, Unable to create listener: Error listen tcp6 [::1]:8084: bind: cannot assign requested address
error: Unable to listen on any of the requested ports: [{8084 8084}]
Unable to listen on port 9000: All listeners failed to create with the following errors: Unable to create listener: Error listen tcp4 127.0.0.1:9000: bind: address already in use, Unable to create listener: Error listen tcp6 [::1]:9000: bind: cannot assign requested address
error: Unable to listen on any of the requested ports: [{9000 9000}]
! ERROR Error encountered running script. See above output for more details.

This looks like a port conflict: something on the machine where you run hal deploy connect is already listening on 8084 and/or 9000. Please share your YAML file for more information; in the meantime you can check which processes hold those ports with the commands below.
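A hedged sketch of those checks, assuming hal deploy connect is run on a Linux workstation; spin-deck and spin-gate are the default Spinnaker service names (the reference config below also points Deck's API_HOST at spin-gate on 8084):

# See what is already listening on the local ports hal deploy connect wants to forward
sudo lsof -i :8084 -i :9000
ss -tlnp | grep -E ':(8084|9000)'

# If those ports are legitimately taken, forward Deck and Gate to other local ports instead
kubectl -n spinnaker port-forward svc/spin-deck 9001:9000
kubectl -n spinnaker port-forward svc/spin-gate 8085:8084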
For comparison, here is a reference manifest:
apiVersion: v1
kind: Namespace
metadata:
  name: spinnaker
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: spinnaker-admin
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: default
  namespace: spinnaker
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: halyard-pv-claim
  namespace: spinnaker
  labels:
    app: halyard-storage-claim
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  storageClassName: standard
---
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: spin-halyard
  namespace: spinnaker
  labels:
    app: spin
    stack: halyard
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: spin
      stack: halyard
  template:
    metadata:
      labels:
        app: spin
        stack: halyard
    spec:
      securityContext:
        runAsGroup: 1000
        fsGroup: 1000
      containers:
      - name: halyard-daemon
        # todo - make :stable or digest of :stable
        image: gcr.io/spinnaker-marketplace/halyard:stable
        imagePullPolicy: Always
        command:
        - /bin/sh
        args:
        - -c
        # We persist the files on a PersistentVolume. To have sane defaults,
        # we initialise those files from a ConfigMap if they don't already exist.
        - "test -f /home/spinnaker/.hal/config || cp -R /home/spinnaker/staging/.hal/. /home/spinnaker/.hal/ && /opt/halyard/bin/halyard"
        readinessProbe:
          exec:
            command:
            - wget
            - -q
            - --spider
            - http://localhost:8064/health
        ports:
        - containerPort: 8064
        volumeMounts:
        - name: persistentconfig
          mountPath: /home/spinnaker/.hal
        - name: halconfig
          mountPath: /home/spinnaker/staging/.hal/config
          subPath: config
        - name: halconfig
          mountPath: /home/spinnaker/staging/.hal/default/service-settings/deck.yml
          subPath: deck.yml
        - name: halconfig
          mountPath: /home/spinnaker/staging/.hal/default/service-settings/gate.yml
          subPath: gate.yml
        - name: halconfig
          mountPath: /home/spinnaker/staging/.hal/default/service-settings/igor.yml
          subPath: igor.yml
        - name: halconfig
          mountPath: /home/spinnaker/staging/.hal/default/service-settings/fiat.yml
          subPath: fiat.yml
        - name: halconfig
          mountPath: /home/spinnaker/staging/.hal/default/service-settings/redis.yml
          subPath: redis.yml
        - name: halconfig
          mountPath: /home/spinnaker/staging/.hal/default/profiles/front50-local.yml
          subPath: front50-local.yml
      volumes:
      - name: halconfig
        configMap:
          name: halconfig
      - name: persistentconfig
        persistentVolumeClaim:
          claimName: halyard-pv-claim
---
apiVersion: v1
kind: Service
metadata:
  name: spin-halyard
  namespace: spinnaker
spec:
  ports:
  - port: 8064
    targetPort: 8064
    protocol: TCP
  selector:
    app: spin
    stack: halyard
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: halconfig
  namespace: spinnaker
data:
  igor.yml: |
    enabled: false
    skipLifeCycleManagement: true
  fiat.yml: |
    enabled: false
    skipLifeCycleManagement: true
  front50-local.yml: |
    spinnaker.s3.versioning: false
  gate.yml: |
    host: 0.0.0.0
  deck.yml: |
    host: 0.0.0.0
    env:
      API_HOST: http://spin-gate.spinnaker:8084
  redis.yml: |
    overrideBaseUrl: redis://spin-redis:6379
  config: |
    currentDeployment: default
    deploymentConfigurations:
    - name: default
      version: 1.12.2
      providers:
        appengine:
          enabled: false
          accounts: []
        aws:
          enabled: false
          accounts: []
          defaultKeyPairTemplate: '{{name}}-keypair'
          defaultRegions:
          - name: us-west-2
          defaults:
            iamRole: BaseIAMRole
        azure:
          enabled: false
          accounts: []
          bakeryDefaults:
            templateFile: azure-linux.json
            baseImages: []
        dcos:
          enabled: false
          accounts: []
          clusters: []
        dockerRegistry:
          enabled: false
          accounts: []
        google:
          enabled: false
          accounts: []
          bakeryDefaults:
            templateFile: gce.json
            baseImages: []
            zone: us-central1-f
            network: default
            useInternalIp: false
        kubernetes:
          enabled: true
          accounts:
          - name: my-kubernetes-account
            requiredGroupMembership: []
            providerVersion: V2
            dockerRegistries: []
            configureImagePullSecrets: true
            serviceAccount: true
            namespaces: []
            omitNamespaces: []
            kinds: []
            omitKinds: []
            customResources: []
            oAuthScopes: []
          primaryAccount: my-kubernetes-account
        oraclebmcs:
          enabled: false
          accounts: []
      deploymentEnvironment:
        size: SMALL
        type: Distributed
        accountName: my-kubernetes-account
        updateVersions: true
        consul:
          enabled: false
        vault:
          enabled: false
        customSizing: {}
        gitConfig:
          upstreamUser: spinnaker
      persistentStorage:
        persistentStoreType: s3
        azs: {}
        gcs:
          rootFolder: front50
        redis: {}
        s3:
          bucket: spinnaker-artifacts
          rootFolder: front50
          endpoint: http://minio-service.spinnaker:9000
          accessKeyId: dont-use-this
          secretAccessKey: for-production
        oraclebmcs: {}
      features:
        auth: false
        fiat: false
        chaos: false
        entityTags: false
        jobs: false
        artifacts: true
      metricStores:
        datadog:
          enabled: false
        prometheus:
          enabled: false
          add_source_metalabels: true
        stackdriver:
          enabled: false
        period: 30
        enabled: false
      notifications:
        slack:
          enabled: false
      timezone: America/Los_Angeles
      ci:
        jenkins:
          enabled: false
          masters: []
        travis:
          enabled: false
          masters: []
      security:
        apiSecurity:
          ssl:
            enabled: false
          overrideBaseUrl: /gate
        uiSecurity:
          ssl:
            enabled: false
        authn:
          oauth2:
            enabled: false
            client: {}
            resource: {}
            userInfoMapping: {}
          saml:
            enabled: false
          ldap:
            enabled: false
          x509:
            enabled: false
          enabled: false
        authz:
          groupMembership:
            service: EXTERNAL
            google:
              roleProviderType: GOOGLE
            github:
              roleProviderType: GITHUB
            file:
              roleProviderType: FILE
          enabled: false
      artifacts:
        gcs:
          enabled: false
          accounts: []
        github:
          enabled: false
          accounts: []
        http:
          enabled: false
          accounts: []
      pubsub:
        google:
          enabled: false
          subscriptions: []
      canary:
        enabled: true
        serviceIntegrations:
        - name: google
          enabled: false
          accounts: []
          gcsEnabled: false
          stackdriverEnabled: false
        - name: prometheus
          enabled: false
          accounts: []
        - name: datadog
          enabled: false
          accounts: []
        - name: signalfx
          enabled: false
          accounts: []
        - name: newrelic
          enabled: false
          accounts: []
        - name: aws
          enabled: true
          accounts:
          - name: kayenta-minio
            bucket: spinnaker-artifacts
            rootFolder: kayenta
            endpoint: http://minio-service.spinnaker:9000
            accessKeyId: dont-use-this
            secretAccessKey: for-production
            supportedTypes:
            - CONFIGURATION_STORE
            - OBJECT_STORE
          s3Enabled: true
        reduxLoggerEnabled: true
        defaultJudge: NetflixACAJudge-v1.0
        stagesEnabled: true
        templatesEnabled: true
        showAllConfigsEnabled: true
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: minio-pv-claim
  namespace: spinnaker
  labels:
    app: minio-storage-claim
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  storageClassName: standard
---
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  # This name uniquely identifies the Deployment
  name: minio-deployment
  namespace: spinnaker
spec:
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: minio
    spec:
      volumes:
      - name: storage
        persistentVolumeClaim:
          claimName: minio-pv-claim
      containers:
      - name: minio
        image: minio/minio
        args:
        - server
        - /storage
        env:
        - name: MINIO_ACCESS_KEY
          value: "dont-use-this"
        - name: MINIO_SECRET_KEY
          value: "for-production"
        ports:
        - containerPort: 9000
        volumeMounts:
        - name: storage
          mountPath: /storage
---
apiVersion: v1
kind: Service
metadata:
  name: minio-service
  namespace: spinnaker
spec:
  ports:
  - port: 9000
    targetPort: 9000
    protocol: TCP
  selector:
    app: minio
---
apiVersion: batch/v1
kind: Job
metadata:
  name: hal-deploy-apply
  namespace: spinnaker
  labels:
    app: job
    stack: hal-deploy
spec:
  template:
    metadata:
      labels:
        app: job
        stack: hal-deploy
    spec:
      restartPolicy: OnFailure
      containers:
      - name: hal-deploy-apply
        # todo use a custom image
        image: gcr.io/spinnaker-marketplace/halyard:stable
        command:
        - /bin/sh
        args:
        - -c
        - "hal deploy apply --daemon-endpoint http://spin-halyard.spinnaker:8064"

Related

What is the correct way of mounting an appsettings file for a .NET Core Worker Service?

I have a .NET Worker Service that runs as a Kubernetes CronJob, but when it starts up it fails to mount the appsettings file. The pod remains stuck in the CrashLoopBackOff state and the logs show the following:
Error: failed to create containerd task: OCI runtime create failed: container_linux.go:380: starting container process caused: process_linux.go:545: container init caused: rootfs_linux.go:76: mounting "/var/lib/kubelet/pods/axxxxxx-1xxxx-4xxx-8xxx-4xxxxxxxxxx/volume-subpaths/secrets/ftp-client/1"
to rootfs at "/app/appsettings.ftp.json" caused: mount through procfd: not a directory: unknown
In the deployment I have mounted the appsettings file as follows:
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: ftp-client
spec:
  schedule: "*/6 * * * *"   # runs every 6 minutes
  # startingDeadlineSeconds: 60
  concurrencyPolicy: Forbid
  successfulJobsHistoryLimit: 1
  failedJobsHistoryLimit: 1
  jobTemplate:
    spec:
      template:
        spec:
          affinity:
            podAffinity:
              requiredDuringSchedulingIgnoredDuringExecution:
                - labelSelector:
                    matchLabels:
                      app.kubernetes.io/name: taat
                  topologyKey: "kubernetes.io/hostname"
          initContainers:
            - name: ftp-backup
              image: registry.xxx.com/xxx/xxx:latest-ftp
              imagePullPolicy: "Always"
              env:
                - name: WEB_URL
                  valueFrom:
                    secretKeyRef:
                      key: url
                      name: web-url
              volumeMounts:
                - mountPath: /tmp
                  name: datadir
              command: ['sh', '-c', "./myscript.sh"]
          containers:
            - name: ftp-client
              image: registry.xxx.com/xxx/xxx:latest-ftp
              imagePullPolicy: "Always"
              resources:
                limits:
                  memory: 500Mi
                requests:
                  cpu: 100m
                  memory: 128Mi
              volumeMounts:
                - mountPath: /tmp
                  name: datadir
                - mountPath: /app/appsettings.ftp.json
                  subPath: appsettings.ftp.json
                  name: secrets
              env:
                - name: DOTNET_ENVIRONMENT
                  value: "Development"
                - name: DOTNET_HOSTBUILDER__RELOADCONFIGONCHANGE
                  value: "false"
          restartPolicy: OnFailure
          imagePullSecrets:
            - name: mycredentials
          volumes:
            - name: datadir
              persistentVolumeClaim:
                claimName: labs
            - name: secrets
              secret:
                secretName: ftp-secret
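For reference, the subPath mount above expects the Secret to contain a key named exactly appsettings.ftp.json; a hedged way to confirm that (ftp-secret is the secret name from the manifest):

# List the keys stored in the secret; appsettings.ftp.json should appear here
kubectl get secret ftp-secret -o jsonpath='{.data}'
# Decode the key to inspect its contents
kubectl get secret ftp-secret -o jsonpath='{.data.appsettings\.ftp\.json}' | base64 -d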
And Program.cs for the Worker Service:
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Hosting;
using FtpClientCron;

IHost host = Host.CreateDefaultBuilder(args)
    .ConfigureServices(services =>
    {
        services.AddHostedService<Worker>();
    })
    .Build();

await host.RunAsync();
appsettings.ftp.json
{
  "ApplicationSettings": {
    "UserOptions": {
      "Username": "xxxxxx",
      "Password": "xxxxxxxxx",
      "Url": "xxx.xxx.com",
      "Port": "xxxx"
    }
  }
}
appsettings.json
{
  "Logging": {
    "LogLevel": {
      "Default": "Information",
      "Microsoft.Hosting.Lifetime": "Information"
    }
  }
}
Dockerfile
FROM mcr.microsoft.com/dotnet/runtime:6.0 AS base
WORKDIR /app
FROM mcr.microsoft.com/dotnet/sdk:6.0 AS build
WORKDIR /app
COPY ./publishh .
ENTRYPOINT ["dotnet", "SftpClientCron.dll"]
What am I missing?

Read only file system error (EFS as persistent storage in EKS)

---- Storage files ----
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: aws-efs
provisioner: aws.io/aws-efs
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: efs-claim
  namespace: dev
  annotations:
    volume.beta.kubernetes.io/storage-class: "aws-efs"
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 20Gi
---- Deployment file ----
apiVersion: v1
kind: ServiceAccount
metadata:
  name: efs-provisioner
  namespace: dev
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: efs-provisioner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: efs-provisioner
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: efs-provisioner
subjects:
  - kind: ServiceAccount
    name: efs-provisioner
    namespace: dev
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-efs-provisioner
  namespace: dev
rules:
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-efs-provisioner
  namespace: dev
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: leader-locking-efs-provisioner
subjects:
  - kind: ServiceAccount
    name: efs-provisioner
    namespace: dev
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: efs-provisioner
  namespace: dev
spec:
  replicas: 1
  selector:
    matchLabels:
      app: efs-provisioner
  template:
    metadata:
      labels:
        app: efs-provisioner
    spec:
      serviceAccount: efs-provisioner
      containers:
        - name: efs-provisioner
          image: quay.io/external_storage/efs-provisioner:latest
          env:
            - name: FILE_SYSTEM_ID
              valueFrom:
                configMapKeyRef:
                  name: efs-provisioner-config
                  key: file.system.id
            - name: AWS_REGION
              valueFrom:
                configMapKeyRef:
                  name: efs-provisioner-config
                  key: aws.region
            - name: DNS_NAME
              valueFrom:
                configMapKeyRef:
                  name: efs-provisioner-config
                  key: dns.name
                  optional: true
            - name: PROVISIONER_NAME
              valueFrom:
                configMapKeyRef:
                  name: efs-provisioner-config
                  key: provisioner.name
          volumeMounts:
            - name: pv-volume
              mountPath: /efs-mount
      volumes:
        - name: pv-volume
          nfs:
            server: <File-system-dns>
            path: /
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: efs-provisioner-config
  namespace: dev
data:
  file.system.id: <File-system-id>
  aws.region: us-east-2
  provisioner.name: aws.io/aws-efs
  dns.name: ""
---- Release file ----
apiVersion: helm.fluxcd.io/v1
kind: HelmRelease
metadata:
  name: airflow
  namespace: dev
  annotations:
    flux.weave.works/automated: "true"
spec:
  releaseName: airflow-dev
  chart:
    repository: https://airflow.apache.org
    name: airflow
    version: 1.6.0
  values:
    fernetKey: <fernet-key>
    defaultAirflowTag: "2.3.0"
    env:
      - name: "AIRFLOW__KUBERNETES__DAGS_IN_IMAGE"
        value: "False"
      - name: "AIRFLOW__KUBERNETES__NAMESPACE"
        value: "dev"
      - name: "AIRFLOW__KUBERNETES__WORKER_CONTAINER_REPOSITORY"
        value: "apache/airflow"
      - name: "AIRFLOW__KUBERNETES__WORKER_CONTAINER_TAG"
        value: "latest"
      - name: "AIRFLOW__KUBERNETES__RUN_AS_USER"
        value: "50000"
      - name: "AIRFLOW__CORE__LOAD_EXAMPLES"
        value: "False"
    executor: "KubernetesExecutor"
    dags:
      persistence:
        enabled: true
        size: 20Gi
        storageClassName: aws-efs
        existingClaim: efs-claim
        accessMode: ReadWriteMany
      gitSync:
        enabled: true
        repo: git@bitbucket.org:<git-repo>
        branch: master
        maxFailures: 0
        subPath: ""
        sshKeySecret: airflow-git-private-dags
        wait: 30
When I go into the scheduler pod and change to the directory /opt/airflow/dags, I get a read-only file system error. But when I run "df -h", I can see that the file system is mounted on the pod. Still, I get the read-only error.
kubectl get pv -n dev
This shows that the PV has RWX access and that it is mounted to my airflow trigger and airflow scheduler pods.
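A hedged sketch of how to narrow down where the read-only flag comes from, both inside the pod and on the PV/PVC side (the scheduler deployment name is assumed from releaseName airflow-dev; the claim name efs-claim comes from the manifests above):

# From inside the scheduler pod: is the dags volume itself mounted ro or rw?
kubectl -n dev exec -it deploy/airflow-dev-scheduler -- sh -c 'mount | grep dags'
# On the storage side: access modes and any explicit mountOptions such as "ro"
kubectl -n dev get pvc efs-claim -o yaml
kubectl get pv -o yaml | grep -B2 -A3 mountOptions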

Azure AKS Let's Encrypt - "Issuing certificate as Secret does not exist"

I have followed the Microsoft tutorial to set up ingress but cannot issue a valid SSL certificate with cert-manager. Below are the describe outputs for the Ingress, ClusterIssuer, and Certificate, followed by the Order and Challenge created by the cluster issuer.
Name:             erpdeploymenttripletex-ingress
Namespace:        tripletex
Address:          20.223.184.33
Default backend:  default-http-backend:80 (<error: endpoints "default-http-backend" not found>)
TLS:
  tls-secret terminates otterlei.northeurope.cloudapp.azure.com
Rules:
  Host                                     Path                     Backends
  ----                                     ----                     --------
  otterlei.northeurope.cloudapp.azure.com
                                           /estataerpiconnectorapi  estataconnservice:80 (10.244.1.150:8080)
                                           /(.*)                    estataconnservice:80 (10.244.1.150:8080)
Annotations:      acme.cert-manager.io/http01-edit-in-place: true
                  cert-manager.io/cluster-issuer: letsencrypt-staging
                  cert-manager.io/issue-temporary-certificate: true
                  kubernetes.io/ingress.class: tripletex
                  meta.helm.sh/release-name: erpideploymenttripletexprod
                  meta.helm.sh/release-namespace: tripletex
                  nginx.ingress.kubernetes.io/ssl-redirect: false
                  nginx.ingress.kubernetes.io/use-regex: true
Events:           <none>
Name: letsencrypt-staging
Namespace:
Labels: <none>
Annotations: kubectl.kubernetes.io/last-applied-configuration:
{"apiVersion":"cert-manager.io/v1alpha2","kind":"ClusterIssuer","metadata":{"annotations":{},"name":"letsencrypt-staging"},"spec":{"acme":...
API Version: cert-manager.io/v1
Kind: ClusterIssuer
Metadata:
Creation Timestamp: 2022-03-11T08:31:50Z
Generation: 1
Managed Fields:
API Version: cert-manager.io/v1alpha2
Fields Type: FieldsV1
fieldsV1:
f:metadata:
f:annotations:
.:
f:kubectl.kubernetes.io/last-applied-configuration:
f:spec:
.:
f:acme:
.:
f:email:
f:privateKeySecretRef:
.:
f:name:
f:server:
f:solvers:
Manager: kubectl.exe
Operation: Update
Time: 2022-03-11T08:31:50Z
API Version: cert-manager.io/v1
Fields Type: FieldsV1
fieldsV1:
f:status:
f:acme:
.:
f:lastRegisteredEmail:
f:conditions:
Manager: controller
Operation: Update
Time: 2022-03-11T08:31:51Z
API Version: cert-manager.io/v1
Fields Type: FieldsV1
fieldsV1:
f:status:
f:acme:
f:uri:
Manager: controller
Operation: Update
Time: 2022-03-14T13:23:16Z
Resource Version: 192224854
UID: 5ef69bfc-f3a9-4bd2-8520-e390adbd1763
Spec:
Acme:
Email: penko.yordanov#icb.bg
Preferred Chain:
Private Key Secret Ref:
Name: letsencrypt-staging
Server: https://acme-staging-v02.api.letsencrypt.org/directory
Solvers:
http01:
Ingress:
Class: nginx
Pod Template:
Metadata:
Spec:
Node Selector:
kubernetes.io/os: linux
Status:
Acme:
Last Registered Email: penko.yordanov#icb.bg
Uri: https://acme-staging-v02.api.letsencrypt.org/acme/acct/47169398
Conditions:
Last Transition Time: 2022-03-11T08:31:51Z
Message: The ACME account was registered with the ACME server
Observed Generation: 1
Reason: ACMEAccountRegistered
Status: True
Type: Ready
Events: <none>
Name: tls-secret
Namespace: tripletex
Labels: <none>
Annotations: kubectl.kubernetes.io/last-applied-configuration:
{"apiVersion":"cert-manager.io/v1","kind":"Certificate","metadata":{"annotations":{},"name":"tls-secret","namespace":"tripletex"},"spec":{...
API Version: cert-manager.io/v1
Kind: Certificate
Metadata:
Creation Timestamp: 2022-03-16T09:37:39Z
Generation: 1
Managed Fields:
API Version: cert-manager.io/v1
Fields Type: FieldsV1
fieldsV1:
f:status:
f:conditions:
Manager: controller
Operation: Update
Time: 2022-03-16T09:37:39Z
API Version: cert-manager.io/v1
Fields Type: FieldsV1
fieldsV1:
f:status:
f:nextPrivateKeySecretName:
Manager: controller
Operation: Update
Time: 2022-03-16T09:37:39Z
API Version: cert-manager.io/v1
Fields Type: FieldsV1
fieldsV1:
f:metadata:
f:annotations:
.:
f:kubectl.kubernetes.io/last-applied-configuration:
f:spec:
.:
f:dnsNames:
f:issuerRef:
.:
f:group:
f:kind:
f:name:
f:secretName:
Manager: kubectl.exe
Operation: Update
Time: 2022-03-16T09:37:39Z
Resource Version: 193021094
UID: e1da4438-952b-4df0-a141-1a3d29e5e9b9
Spec:
Dns Names:
otterlei.northeurope.cloudapp.azure.com
Issuer Ref:
Group: cert-manager.io
Kind: ClusterIssuer
Name: letsencrypt-staging
Secret Name: tls-secret
Status:
Conditions:
Last Transition Time: 2022-03-16T09:37:39Z
Message: Issuing certificate as Secret does not exist
Observed Generation: 1
Reason: DoesNotExist
Status: False
Type: Ready
Last Transition Time: 2022-03-16T09:37:39Z
Message: Issuing certificate as Secret does not exist
Observed Generation: 1
Reason: DoesNotExist
Status: True
Type: Issuing
Next Private Key Secret Name: tls-secret-kxkhf
Events: <none>
Order
Name: tls-secret-fxpxl-1057960237
Namespace: tripletex
Labels: <none>
Annotations: cert-manager.io/certificate-name: tls-secret
cert-manager.io/certificate-revision: 1
cert-manager.io/private-key-secret-name: tls-secret-kxkhf
kubectl.kubernetes.io/last-applied-configuration:
{"apiVersion":"cert-manager.io/v1","kind":"Certificate","metadata":{"annotations":{},"name":"tls-secret","namespace":"tripletex"},"spec":{...
API Version: acme.cert-manager.io/v1
Kind: Order
Metadata:
Creation Timestamp: 2022-03-16T09:37:40Z
Generation: 1
Managed Fields:
API Version: acme.cert-manager.io/v1
Fields Type: FieldsV1
fieldsV1:
f:status:
.:
f:finalizeURL:
f:state:
f:url:
Manager: controller
Operation: Update
Time: 2022-03-16T09:37:40Z
API Version: acme.cert-manager.io/v1
Fields Type: FieldsV1
fieldsV1:
f:status:
f:authorizations:
Manager: controller
Operation: Update
Time: 2022-03-16T09:37:40Z
Owner References:
API Version: cert-manager.io/v1
Block Owner Deletion: true
Controller: true
Kind: CertificateRequest
Name: tls-secret-fxpxl
UID: 6ec06c5a-8bd7-49a0-90a5-7d71b796f236
Resource Version: 193021106
UID: 50539071-d3ed-4d79-a2f6-6fcc79f0d41b
Spec:
Dns Names:
otterlei.northeurope.cloudapp.azure.com
Issuer Ref:
Group: cert-manager.io
Kind: ClusterIssuer
Name: letsencrypt-staging
Request: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURSBSRVFVRVNULS0tLS0KTUlJQ2x6Q0NBWDhDQVFBd0FEQ0NBU0l3RFFZSktvWklodmNOQVFFQkJRQURnZ0VQQURDQ0FRb0NnZ0VCQU5ZNQoxUHlqWmhuNnJNbUVUVnBvK0JpWEJGbFAwS0tUajJKYXZGVnhiWXJXV1BxWDdlZzBLUUI0U2xrYVlMK09IcE5tCmFqVXNOWGhGZ2pxc2s5Z2FIWnJuMS9uS284ZWRnSWxLc00vdVFrQ2tnZHQvaXMwOHN5cGxlN3dhMWVkOFNzZCsKMzhXd1ZUaUloNHFPdVRXajJIenRhbUpRNStGcWRidHJZUE5HaTNwakNBcDE0N0RWZG9xRjN0ZkQ2VTRlZjRBMQp1TnN3VFhtVU1tb2wvVlhxYmxSOWxLdmplczFSTjV5N0o4aFBKZGtEMFVtYVFXbkVUSE9tY1A1Lzk3bjBDbzdrCk1CVzR5TkoyNDJmSzAxYnJTRWx3d08rL1hkWXFSNVpQQVp3QWoxRjF6Y3hrZGs2azIrWmlpcmk3Z0U0enVJTjYKRmJLbmhOOGE3dEZHS3VYNUtzOENBd0VBQWFCU01GQUdDU3FHU0liM0RRRUpEakZETUVFd01nWURWUjBSQkNzdwpLWUluYjNSMFpYSnNaV2t1Ym05eWRHaGxkWEp2Y0dVdVkyeHZkV1JoY0hBdVlYcDFjbVV1WTI5dE1Bc0dBMVVkCkR3UUVBd0lGb0RBTkJna3Foa2lHOXcwQkFRc0ZBQU9DQVFFQXJ3ZXFvalRocUswMEFJVFBiRUhBSk5nZk9kcmQKbVF3MTZaeXQ1a1J4Uk1Cc2VsL1dURGhzY0Q0bklqQWtpVzA2akluMUhOanVCNm1WdVprU0RnRVVLZG15bUJEUgpTcFQvSWtuWkZTci9mWkxFWXNjUnRKcTFFVmhoaTR1bG5ZUnptclkwQ3VsMGVKZzNOYitzZmxJanZMZVQ1N05mClphK3RleXZFSGpMOWVjNEVUbVRRamIxNUdaK3lKZkx6SjA4QU1Qd1JSZkFhYzBkc2RyR0Z3VEF3TGc3MWlTdnMKc3lVdmJBNzQ5T3JlOXZvcko5cjdNQk1mSXBKOXYwTGQwL3IzV1NHSXBkbko2WE1GU28wdGlOZDJlRXFxbDRBMgpEamV2YjVnVnJRTkNnNCtGQzlxbXNLeDJFR2w5MlFNQ0h3WSsrOVdteWIxTmtBbG9RSkZhN3ZIUEFnPT0KLS0tLS1FTkQgQ0VSVElGSUNBVEUgUkVRVUVTVC0tLS0tCg==
Status:
Authorizations:
Challenges:
Token: W7zdK6beQBcAPTSTrc_6Mv_wiDknSgh3i1XKb617Nos
Type: http-01
URL: https://acme-staging-v02.api.letsencrypt.org/acme/chall-v3/1913552008/KocZGw
Token: W7zdK6beQBcAPTSTrc_6Mv_wiDknSgh3i1XKb617Nos
Type: dns-01
URL: https://acme-staging-v02.api.letsencrypt.org/acme/chall-v3/1913552008/x0hWcg
Token: W7zdK6beQBcAPTSTrc_6Mv_wiDknSgh3i1XKb617Nos
Type: tls-alpn-01
URL: https://acme-staging-v02.api.letsencrypt.org/acme/chall-v3/1913552008/Hidh4g
Identifier: otterlei.northeurope.cloudapp.azure.com
Initial State: pending
URL: https://acme-staging-v02.api.letsencrypt.org/acme/authz-v3/1913552008
Wildcard: false
Finalize URL: https://acme-staging-v02.api.letsencrypt.org/acme/finalize/47169398/2042532738
State: pending
URL: https://acme-staging-v02.api.letsencrypt.org/acme/order/47169398/2042532738
Events: <none>
challenge
Name: tls-secret-fxpxl-1057960237-691767986
Namespace: tripletex
Labels: <none>
Annotations: <none>
API Version: acme.cert-manager.io/v1
Kind: Challenge
Metadata:
Creation Timestamp: 2022-03-16T09:37:40Z
Finalizers:
finalizer.acme.cert-manager.io
Generation: 1
Managed Fields:
API Version: acme.cert-manager.io/v1
Fields Type: FieldsV1
fieldsV1:
f:metadata:
f:finalizers:
.:
v:"finalizer.acme.cert-manager.io":
f:ownerReferences:
.:
k:{"uid":"50539071-d3ed-4d79-a2f6-6fcc79f0d41b"}:
f:spec:
.:
f:authorizationURL:
f:dnsName:
f:issuerRef:
.:
f:group:
f:kind:
f:name:
f:key:
f:solver:
.:
f:http01:
.:
f:ingress:
.:
f:class:
f:podTemplate:
.:
f:metadata:
f:spec:
.:
f:nodeSelector:
.:
f:kubernetes.io/os:
f:token:
f:type:
f:url:
f:wildcard:
Manager: controller
Operation: Update
Time: 2022-03-16T09:37:40Z
Owner References:
API Version: acme.cert-manager.io/v1
Block Owner Deletion: true
Controller: true
Kind: Order
Name: tls-secret-fxpxl-1057960237
UID: 50539071-d3ed-4d79-a2f6-6fcc79f0d41b
Resource Version: 193021107
UID: 665341e0-2745-48c2-a985-166e58646d44
Spec:
Authorization URL: https://acme-staging-v02.api.letsencrypt.org/acme/authz-v3/1913552008
Dns Name: otterlei.northeurope.cloudapp.azure.com
Issuer Ref:
Group: cert-manager.io
Kind: ClusterIssuer
Name: letsencrypt-staging
Key: W7zdK6beQBcAPTSTrc_6Mv_wiDknSgh3i1XKb617Nos.PeCQyw56kTw4k7brocD-LfWP2NllTueut46pJ7EU2yw
Solver:
http01:
Ingress:
Class: nginx
Pod Template:
Metadata:
Spec:
Node Selector:
kubernetes.io/os: linux
Token: W7zdK6beQBcAPTSTrc_6Mv_wiDknSgh3i1XKb617Nos
Type: HTTP-01
URL: https://acme-staging-v02.api.letsencrypt.org/acme/chall-v3/1913552008/KocZGw
Wildcard: false
Events: <none>
The message "Issuing certificate as Secret does not exist" is ok as the secret with the cert does not exist.
Can you try this config:
Cluster issuer:
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt
  namespace: cert-manager
spec:
  acme:
    email: EMAIL
    server: https://acme-v02.api.letsencrypt.org/directory
    privateKeySecretRef:
      name: issuer-key
    solvers:
      - http01:
          ingress:
            class: nginx
Ingress:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt
spec:
  ingressClassName: nginx
  rules:
    - host: YOUR_URL
      http:
        paths:
          - backend:
              service:
                name: DEMO
                port:
                  number: 80
            path: /
            pathType: ImplementationSpecific
  tls:
    - hosts:
        - YOUR_URL
      secretName: YOUR_URL
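Once that is applied, the issuance can be followed through the cert-manager resources; a hedged sketch of the commands (the tripletex namespace comes from the describe output above):

kubectl get clusterissuer letsencrypt
kubectl -n tripletex get certificate,certificaterequest,order,challenge
kubectl -n tripletex describe challenge    # explains why an HTTP-01 challenge is still pending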

Why is the certificate not recognized by the ingress?

I have installed https://cert-manager.io on my Kubernetes cluster and have created a cluster issuer:
apiVersion: v1
kind: Secret
metadata:
  name: digitalocean-dns
  namespace: cert-manager
data:
  # insert your DO access token here
  access-token: secret
---
apiVersion: cert-manager.io/v1alpha2
kind: ClusterIssuer
metadata:
  name: letsencrypt-staging
spec:
  acme:
    email: mail@example.io
    server: https://acme-staging-v02.api.letsencrypt.org/directory
    privateKeySecretRef:
      name: secret
    solvers:
      - dns01:
          digitalocean:
            tokenSecretRef:
              name: digitalocean-dns
              key: access-token
        selector:
          dnsNames:
            - "*.tool.databaker.io"
            #- "*.service.databaker.io"
---
apiVersion: cert-manager.io/v1alpha2
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    email: mail@example.io
    server: https://acme-v02.api.letsencrypt.org/directory
    privateKeySecretRef:
      name: secret
    solvers:
      - dns01:
          digitalocean:
            tokenSecretRef:
              name: digitalocean-dns
              key: access-token
        selector:
          dnsNames:
            - "*.tool.databaker.io"
I have also created a certificate:
apiVersion: cert-manager.io/v1alpha2
kind: Certificate
metadata:
  name: hello-cert
spec:
  secretName: hello-cert-prod
  issuerRef:
    name: letsencrypt-prod
    kind: ClusterIssuer
  commonName: "*.tool.databaker.io"
  dnsNames:
    - "*.tool.databaker.io"
and it was successfully created:
Normal Requested 8m31s cert-manager Created new CertificateRequest resource "hello-cert-2824719253"
Normal Issued 7m22s cert-manager Certificate issued successfully
To figure out whether the certificate is working, I have deployed a service:
apiVersion: v1
kind: Service
metadata:
  name: hello-kubernetes-first
spec:
  type: ClusterIP
  ports:
    - port: 80
      targetPort: 8080
  selector:
    app: hello-kubernetes-first
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-kubernetes-first
spec:
  replicas: 3
  selector:
    matchLabels:
      app: hello-kubernetes-first
  template:
    metadata:
      labels:
        app: hello-kubernetes-first
    spec:
      containers:
        - name: hello-kubernetes
          image: paulbouwer/hello-kubernetes:1.7
          ports:
            - containerPort: 8080
          env:
            - name: MESSAGE
              value: Hello from the first deployment!
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: hello-kubernetes-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
    cert-manager.io/cluster-issuer: letsencrypt-prod
spec:
  rules:
    - host: hello.tool.databaker.io
      http:
        paths:
          - backend:
              serviceName: hello-kubernetes-first
              servicePort: 80
---
But it does not work properly.
What am I doing wrong?
You haven't specified the secret containing your certificate:
spec:
  tls:
    - hosts:
        - hello.tool.databaker.io
      secretName: <secret containing the certificate>
  rules:
  ...
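A hedged way to verify the wiring end to end, using the names from the manifests above (hello-cert-prod is the secretName declared in the Certificate):

kubectl get certificate hello-cert                   # READY should become True
kubectl get secret hello-cert-prod                   # the secret the ingress must reference
kubectl describe ingress hello-kubernetes-ingress    # the TLS section should list the secret and host
curl -vk https://hello.tool.databaker.io 2>&1 | grep -i 'subject\|issuer'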

Problem with Kubernetes and NGINX: 403 error code

I'm trying to deploy my first Kubernetes application. I've set everything up, but now when I try to access it over the cluster's IP address I get this message:
{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {},
  "status": "Failure",
  "message": "forbidden: User \"system:anonymous\" cannot get path \"/\": No policy matched.",
  "reason": "Forbidden",
  "details": {},
  "code": 403
}
Does anybody know what the problem could be? Does it have anything to do with NGINX?
Also, here is my .yaml file:
# Certificate
apiVersion: certmanager.k8s.io/v1alpha1
kind: Certificate
metadata:
  name: ${APP_NAME}
  namespace: gitlab-managed-apps
spec:
  secretName: ${APP_NAME}-cert
  dnsNames:
    - ${URL}
    - www.${URL}
  acme:
    config:
      - domains:
          - ${URL}
          - www.${URL}
        http01:
          ingressClass: nginx
  issuerRef:
    name: ${CERT_ISSUER}
    kind: ClusterIssuer
---
# Ingress
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ${APP_NAME}
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/ssl-redirect: 'true'
    nginx.ingress.kubernetes.io/from-to-www-redirect: 'true'
spec:
  tls:
    - secretName: ${APP_NAME}-cert
      hosts:
        - ${URL}
        - www.${URL}
  rules:
    - host: ${URL}
      http:
        paths:
          - backend:
              serviceName: ${APP_NAME}
              servicePort: 80
---
# Service
apiVersion: v1
kind: Service
metadata:
  name: ${APP_NAME}
  labels:
    app: ${CI_PROJECT_NAME}
spec:
  selector:
    name: ${APP_NAME}
    app: ${CI_PROJECT_NAME}
  ports:
    - name: http
      port: 80
      targetPort: http
---
# Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ${APP_NAME}
  labels:
    app: ${CI_PROJECT_NAME}
spec:
  replicas: ${REPLICAS}
  revisionHistoryLimit: 0
  selector:
    matchLabels:
      app: ${CI_PROJECT_NAME}
  template:
    metadata:
      labels:
        name: ${APP_NAME}
        app: ${CI_PROJECT_NAME}
    spec:
      containers:
        - name: webapp
          image: eu.gcr.io/my-site/my-site.com:latest
          imagePullPolicy: Always
          ports:
            - name: http
              containerPort: 80
          env:
            - name: COMMIT_SHA
              value: ${CI_COMMIT_SHA}
          livenessProbe:
            tcpSocket:
              port: 80
            initialDelaySeconds: 30
            timeoutSeconds: 1
          readinessProbe:
            tcpSocket:
              port: 80
            initialDelaySeconds: 5
            timeoutSeconds: 1
          resources:
            requests:
              memory: '16Mi'
            limits:
              memory: '64Mi'
      imagePullSecrets:
        - name: ${REGISTRY_PULL_SECRET}
I would really appreciate it if anybody could help me!
Just add the path in your ingress:
rules:
  - host: ${URL}
    http:
      paths:
        - backend:
            serviceName: ${APP_NAME}
            servicePort: 80
          path: /
https://kubernetes.io/docs/concepts/services-networking/ingress/#the-ingress-resource
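To confirm that requests are actually reaching the NGINX ingress controller rather than some other endpoint, a hedged check (the controller's namespace varies; on GitLab-managed clusters it is typically gitlab-managed-apps):

kubectl get ingress ${APP_NAME}                      # note the ADDRESS column
kubectl get svc --all-namespaces | grep -i nginx     # external IP of the ingress controller service
curl -v "http://<ingress-address>/" -H "Host: <your-url>"   # placeholders: substitute the values from above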
