Date and time synchronization among the pods and host in Kubernetes

I am having an issue with date and time in a Kubernetes cluster. The cluster is set up in the data center using kubeadm. My host server time is synced using NTP, and I re-synced it after configuring the cluster. Still, all the pods created within my cluster have the wrong time. The cause seems to be Docker defaulting to the UTC timezone. As a temporary workaround I volume-mount /etc/localtime from the host machine into the pods we create, but that is not feasible for the applications I install using Helm from a Helm repo. Is there any way to fix this issue? I don't want every pod to need a volume mount just to get the correct time. Is there any way for Docker to pick up the timezone from the host machine?
FYI, the k8s cluster is set up on CentOS 7. The nodes are VMs created on ESXi. Thank you.

It's not broken. It's working as designed.
The clock in a container is the same as on the host machine because it is controlled by that machine's kernel.
The timezone, however, is controlled by the OS layer, so it may be different inside the container.
The way around it is to use a hostPath volume to mount a specific timezone file into the container:
apiVersion: v1
kind: Pod
metadata:
  name: busybox-sleep
spec:
  containers:
  - name: busybox
    image: busybox
    args:
    - sleep
    - "1000000"
    volumeMounts:
    - name: tz-config
      mountPath: /etc/localtime
  volumes:
  - name: tz-config
    hostPath:
      path: /usr/share/zoneinfo/Europe/Prague
      type: File
Because you are using Helm, you should check the documentation for the chart/image you are using and look for a timezone variable you can change, so you can put it in your values.yaml or use the --set option when deploying.
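As a purely hypothetical illustration (the actual key name varies per chart and has to be looked up in that chart's values.yaml), setting such a value could look like this:
-- hypothetical value name "timezone"; check the chart's values.yaml for the real key
$ helm install my-release stable/some-chart --set timezone="Europe/Prague"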
I recommend reading Kubernetes Container Timezone Management.

Related

Kubernetes Ingress doesn't find/expose the application properly

I have one application on two environments; it's been running for well over a year. Now I had to re-deploy it on one env and I'm left with half-working external traffic.
Example of the working one:
$ kubectl get ingress
NAME     HOSTS                ADDRESS                         PORTS     AGE
my-app   prod-app.my.domain   <public IP, e.g. 41.30.20.20>   80, 443   127d
and the not working one
MacBook-Pro% kubectl get ingress
NAME     HOSTS               ADDRESS                                           PORTS     AGE
my-app   dev-app.my.domain   10.223.0.76,10.223.0.80,10.223.0.81,10.223.0.99   80, 443   5m5s
(for some reason private addresses, not the public one I assigned?)
The deployment works like so: in Helm I have the deployments, services etc. plus a Kubernetes Ingress resource:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: {{ .Values.deployment.name }}
  namespace: {{ .Values.deployment.env }}
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    nginx.ingress.kubernetes.io/backend-protocol: "HTTP"
    nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
    <some other annotations>
spec:
  tls:
  - secretName: {{ .Values.ingress.tlsSecretName.Games }}
  rules:
  - host: [prod,dev]-app.my.domain
    http:
      paths:
      - path: /
        backend:
          serviceName: my-app
          servicePort: {{ .Values.service.port }}
and before it I deployed the stable/nginx-ingress Helm chart (yup, I know there is ingress-nginx/ingress-nginx - will migrate to it soon, but first I want to bring the env back)
and the simple nginx config
controller:
  name: main
  tag: "v0.41.2"
  config:
    log-format-upstream: ....
  replicaCount: 4
  service:
    externalTrafficPolicy: Local
  updateStrategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 25% # max number of Pods that can be unavailable during the update
    type: RollingUpdate
  # We want to disperse pods across the whole cluster, on each data node
  affinity:
    podAntiAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 100
        podAffinityTerm:
          labelSelector:
            matchExpressions:
            - key: app
              operator: In
              # the app label is set in the main deployment manifest
              # https://github.com/helm/charts/blob/master/stable/nginx-ingress/templates/controller-deployment.yaml#L6
              values:
              - nginx-ingress
            - key: release
              operator: In
              values:
              - my-app-ingress
          topologyKey: kubernetes.io/hostname
Any idea why my Kubernetes ingress has private addresses and not the assigned public one?
And my services on prod are:
my-app                          NodePort       10.190.173.152   <none>     8093:32519/TCP               127d
my-app-ingress-stg-controller   LoadBalancer   10.190.180.54    <PUB_IP>   80:30111/TCP,443:30752/TCP   26d
and on dev:
my-app                    NodePort       10.190.79.119   <none>     8093:30858/TCP               10m
my-app-ingress-dev-main   LoadBalancer   10.190.93.104   <PUB_IP>   80:32027/TCP,443:30534/TCP   10m
I kinda see the problem (because I already tried migrating to the new nginx a month ago, and on dev there is still the old one, but there were issues with having multiple envs on the same dev cluster with ingresses) - I guess I'll try to migrate to the new one and see if that somehow fixes the issue - other than that, any idea why the private addresses?
Not sure how it works, but I deployed the ingress (nginx-ingress helm chart) after deploying the application helm chart; at first all pods were 1/1 ready and the site didn't respond, and after ~10 min it did so ¯\_(ツ)_/¯ no idea why it took so long. As for future reference, what I did was:
Reserve a public IP in GCP (my cloud provider)
Create an A record where my domain is registered (GoDaddy etc.) to point to that public address from step 1
Deploy the app helm chart with the ingress in it, with my domain and ssl-cert in it, and the Kubernetes service (load balancer) having that public IP
Deploy nginx-ingress pointing to that public address from the domain (see the sketch below)
If there is any mistake in my logic please say so and I'll update it.
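Pinning the reserved public address to the controller's Service is typically done through the chart's loadBalancerIP value; a rough sketch of the relevant nginx-ingress values (the address is a placeholder):
controller:
  service:
    # sketch: ask the cloud provider for this specific reserved address
    loadBalancerIP: <RESERVED_PUBLIC_IP>
    externalTrafficPolicy: Local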
#potatopotato I have just moved your own answer from the initial question into a separate community wiki answer. That way it will be more searchable and better indexed in future searches.
Explanation regarding the below:
Not sure how it works, but I deployed the ingress (nginx-ingress helm chart) after deploying the application helm chart; at first all pods were 1/1 ready and the site didn't respond, and after ~10 min it did so ¯\_(ツ)_/¯ no idea why it took so long
As per official documentation:
Note: It might take a few minutes for GKE to allocate an external IP address and prepare the load balancer. You might get errors like HTTP 404 and HTTP 500 until the load balancer is ready to serve the traffic.
Your answer itself:
Not sure how it works, but I deployed the ingress (nginx-ingress helm chart) after deploying the application helm chart; at first all pods were 1/1 ready and the site didn't respond, and after ~10 min it did so ¯\_(ツ)_/¯ no idea why it took so long. As for future reference, what I did was:
Reserve a public IP in GCP (my cloud provider)
Create an A record where my domain is registered (GoDaddy etc.) to point to that public address from step 1
Deploy the app helm chart with the ingress in it, with my domain and ssl-cert in it, and the Kubernetes service (load balancer) having that public IP
Deploy nginx-ingress pointing to that public address from the domain

Deploying Kubernetes Cluster + Wordpress on Ubuntu 18.04 VPS pointing to my domain

Scenario:
I'm trying to learn some K8s by re-creating my current stack with containers and container orchestration.
Background:
My current dev stack consists of Wordpress, NextCloud, BTCPayServer, Jitsi and Mail-in-a-box. Everything is open source and running just fine, each one in its own separate KVM VPS.
Objectives:
My goal is to be able to deploy the same solutions on K8s in order to be able to scale and gain a bit more flexibility while developing.
Limitations:
My budget is limited so I work with KVM VPS in small hosting providers (no AWS/GCE/Azure/DO/etc).
Current K8S setup is 1 Domain with 1 Master + 2 Workers:
- Master 1: 2xCPU, 2gb RAM, 50gb SSD, Master01_IP
- Worker 1: 1xCPU, 1gb RAM, 10gb SSD, Worker01_IP
- Worker 2: 2xCPU, 3gb RAM, 60gb SSD, Worker02_IP
Somehow I managed to get the cluster working: I edited /etc/hosts with all 3 IPs on each server, ran the master and then joined the two workers.
Then installed Wordpress by executing:
helm install wordpress-test bitnami/wordpress
I get stuck at EXTERNAL-IP <pending>:
$ kubectl get svc --namespace default -w wordpress-test
wordpress-test   LoadBalancer   10.104.15.90   <pending>   80:32577/TCP,443:31388/TCP   102s
terminal-screenshot
How do I expose the deployment's EXTERNAL-IP on one of my available IPs (1 master, 2 workers) so I can access it by going to mydomain.xyz?
I've read about LoadBalancers, but most documentation refers to big cloud providers such as AWS, GCE, Azure and DigitalOcean; all of them are out of my scope since I don't even have a credit card to register and make an account.
I need to learn how to deploy this with my own resources, so here I am asking for some help :)
You can deploy an ingress controller such as nginx and expose the ingress controller via NodePort (a sketch of that follows the example below). Then use an Ingress resource to expose the service via nginx:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
  name: rewrite
  namespace: default
spec:
  rules:
  - host: mydomain.xyz
    http:
      paths:
      - backend:
          serviceName: coffee-svc
          servicePort: 80
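Installing and exposing the controller itself over NodePort could look roughly like this (a sketch assuming Helm and the stable/nginx-ingress chart; value names can differ between chart versions):
-- sketch: exposes the nginx ingress controller on a NodePort on every node
$ helm install nginx-ingress stable/nginx-ingress --set controller.service.type=NodePort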
Since the domain mydomain.xyz is not a real domain registered with a DNS provider, you could just modify the /etc/hosts file of the system from which you would access it, mapping mydomain.xyz to a NodeIP.
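The hosts entry is a single line of this form (the node IP below is just a placeholder):
# /etc/hosts on the client machine; 203.0.113.10 stands in for one of your node IPs
203.0.113.10  mydomain.xyz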
Another option would be to use MetalLB, which is an implementation of LoadBalancer for bare-metal clusters without a cloud provider.
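With MetalLB in Layer 2 mode, the pool of addresses it may hand out is declared in a ConfigMap; a sketch (the address range is a placeholder, and this format applies to the ConfigMap-based MetalLB releases):
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      # placeholder range; use addresses routable to your nodes
      - 203.0.113.240-203.0.113.250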

Expose an external service to be accessible from within the cluster

I am trying to set up connections from within the cluster to my databases, which reside outside of the GKE cluster.
I have read various tutorials including
https://cloud.google.com/blog/products/gcp/kubernetes-best-practices-mapping-external-services
and multiple SO questions though the problem persists.
Here is an example configuration with which I am trying to set up Kafka connectivity:
---
kind: Endpoints
apiVersion: v1
metadata:
  name: kafka
subsets:
- addresses:
  - ip: 10.132.0.5
  ports:
  - port: 9092
---
kind: Service
apiVersion: v1
metadata:
  name: kafka
spec:
  type: ClusterIP
  ports:
  - port: 9092
    targetPort: 9092
I am able to get some sort of response by connecting directly via nc 10.132.0.5 9092 from the node VM itself, but if I create a pod, say with kubectl run -it --rm --restart=Never alpine --image=alpine sh, then I am unable to connect from within the pod using nc kafka 9092. All libraries in my code fail by timing out, so it seems to be some kind of routing issue.
Kafka is given as an example; I am having the same issues connecting to other databases as well.
Solved it - the issue was with my understanding of how GCP operates.
To solve it I had to add a firewall rule which allows incoming traffic from the internal GKE network; in my case that was the 10.52.0.0/24 address range.
Hope it helps someone.
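Such a rule can be created with gcloud; a sketch (the rule name, network and port are assumptions to adapt):
-- sketch: allow traffic from the GKE pod range to the VM running Kafka
$ gcloud compute firewall-rules create allow-gke-to-kafka --network=default --source-ranges=10.52.0.0/24 --allow=tcp:9092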

Automating my own job through Jenkins and publishing over Kubernetes through HTTPD or NGINX

I have some files in React. I do the build with npm, locally, and I have the build path.
I want to deploy this build to pods in Kubernetes.
How do I write the deployment.yaml?
How do I configure my nginx or httpd root folder so it can publish my code?
If I first have to make a Docker image of that project, then how?
First you have to create a Dockerfile.
E.g. Dockerfile:
FROM golang
WORKDIR /go/src/github.com/habibridho/simple-go
ADD . ./
RUN CGO_ENABLED=0 GOOS=linux go build -a -installsuffix cgo -o simple-go .
EXPOSE 8888
ENTRYPOINT ./simple-go
Build your image and try to run it.
$ docker build -t simple-go .
$ docker run -d -p 8888:8888 simple-go
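For the React build from the question, a comparable image could just serve the static files with nginx; a minimal sketch, assuming the npm build output sits in ./build:
# sketch: serve a pre-built React app with nginx
FROM nginx:alpine
COPY build/ /usr/share/nginx/html/
EXPOSE 80
Build and run it the same way as above.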
Next step is transferring the image to the server. You can use Docker Hub: push the image to the repository and pull it from the server.
-- on local machine
$ docker tag simple-go habibridho/simple-go
$ docker push habibridho/simple-go
-- on server
$ docker pull habibridho/simple-go
Note that the default Docker repository visibility is public, so if your project is private you need to change the project visibility on the Docker Hub website.
Useful information about that process can be found here:
docker-images
Once we have the image on the server, we can run the app just like we did on the local machine, by creating a Deployment.
The following is an example of a Deployment. It creates a ReplicaSet to bring up three Pods of your app:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: your-deployment
  labels:
    app: your-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: your-app
  template:
    metadata:
      labels:
        app: your-app
    spec:
      containers:
      - name: your-app
        image: your-app:version
        ports:
        - containerPort: port
In this example:
A Deployment named your-deployment is created, indicated by the .metadata.name field.
The Deployment creates three replicated Pods, indicated by the replicas field.
The selector field defines how the Deployment finds which Pods to manage. In this case, you simply select a label that is defined in the Pod template (app: your-app).
However, more sophisticated selection rules are possible, as long as the Pod template itself satisfies the rule.
To create the Deployment, run the following command:
$ kubectl create -f your_deployment_file_name.yaml
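To actually reach the Pods you also need a Service in front of the Deployment; a quick sketch exposing it on a NodePort (the port is an assumption, use your app's real port):
-- sketch: creates a NodePort Service selecting the Deployment's Pods
$ kubectl expose deployment your-deployment --type=NodePort --port=80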
More information can be found here: kubernetes-deployment.

Can nginx.conf access environment variables?

I'm trying to run a Docker container with nginx on a Kubernetes cluster. I'm using environment-variable service discovery for all my other containers, so I would like to keep it consistent and not have to bring something like SkyDNS into the mix just because of this. Is it possible to access environment variables in nginx so that I can tell it to proxy_pass to a Kubernetes service?
How about the shell script below, which is run by a Docker container?
https://github.com/GoogleCloudPlatform/kubernetes/blob/295bd3768d016a545d4a60cbb81a4983c2a26968/cluster/addons/fluentd-elasticsearch/kibana-image/run_kibana_nginx.sh
You mean use the value of an env var set in this way in a config file for nginx? One thing I have done in the past is to have a run.sh script, run by the Docker container, that uses the env variable to effect substitution in a template file for the nginx config - is that what you mean?
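A common shape for that kind of substitution script is envsubst over a template; a sketch, assuming a template file with ${...} placeholders baked into the image and hypothetical SERVICE_HOST/SERVICE_PORT variables:
# sketch entrypoint: render the template, then start nginx in the foreground
envsubst '${SERVICE_HOST} ${SERVICE_PORT}' < /etc/nginx/nginx.conf.template > /etc/nginx/nginx.conf
exec nginx -g 'daemon off;'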
There were tons of issues with doing the hacky HEREDOC, including it only giving one-time service discovery (not much better than hard coding). So my solution ended up being to use confd to template nginx and restart nginx when the environment variables change. Here's the link to confd: https://github.com/kelseyhightower/confd
Keeping an included config file in the ConfigMap mounted as volume should work too.
You might need to change the config files' structure for that though.
In the spec you can define an environment variable, e.g.:
spec:
  containers:
  - name: kibana-logging
    image: gcr.io/google_containers/kibana:1.3
    livenessProbe:
      httpGet:
        path: /
        port: 5601
      initialDelaySeconds: 30
      timeoutSeconds: 5
    env:
    - name: "ELASTICSEARCH_URL"
      value: "http://elasticsearch-logging:9200"
    ports:
    - containerPort: 5601
      name: kibana-port
      protocol: TCP
This will cause the environment variable ELASTICSEARCH_URL to be set to http://elasticsearch-logging:9200. Will this work for you?
Cheers,
Satnam
