Deploy docker image to k8s cluster issue - nginx

I am trying to deploy a Docker image to Kubernetes but am hitting a strange issue. Below is the stage I am using in my Jenkinsfile:
stage('Deploy to k8s') {
    steps {
        sshagent(['kops-machine']) {
            sh "scp -o StrictHostKeyChecking=no deployment.yml ubuntu@<ip>:/home/ubuntu/"
            sh "ssh ubuntu@<ip> kubectl apply -f ."
            sh 'kubectl set image deployment/nginx-deployment nginx=account/repo:${BUILD_NUMBER}'
        }
    }
}
I am getting this error message:
kubectl set image deployment/nginx-deployment nginx=account/repo:69
error: the server doesn't have a resource type "deployment"
The strange thing is that if I copy and paste this command and execute it on the cluster directly, the image gets updated:
kubectl set image deployment/nginx-deployment nginx=account/repo:69
Can somebody please help? The image builds and pushes to Docker Hub successfully; I am only stuck on pulling and deploying it to the Kubernetes cluster. If you have any other alternatives, please let me know. The deployment.yml file that gets copied to the server is as follows:
spec:
  containers:
  - name: nginx
    image: account/repo:3
    ports:
    - containerPort: 80
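For context, the snippet above is only the container section of the manifest; a minimal complete Deployment around it might look like the following sketch (the `app: nginx` label and replica count are assumptions, not from the original):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx          # assumed label, must match template below
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: account/repo:3
        ports:
        - containerPort: 80
```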

OK, so I found a workaround. If I change this line in my Jenkinsfile
sh "ssh ubuntu@<ip> kubectl apply -f ."
to
sh "ssh ubuntu@<ip> kubectl set image deployment/nginx-deployment nginx=account/repo:${BUILD_NUMBER}"
it works. But if no deployment has been created yet, I have to add both lines to make it work:
sh "ssh ubuntu@<ip> kubectl apply -f ."
sh "ssh ubuntu@<ip> kubectl set image deployment/nginx-deployment nginx=account/repo:${BUILD_NUMBER}"
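Putting the workaround together, the whole stage might look like the following sketch (untested; host placeholder and credential ID as in the question, and the copied file is applied by explicit path instead of `-f .`). Running both kubectl commands over SSH matters because they then use the cluster host's kubeconfig rather than the Jenkins agent's:

```groovy
stage('Deploy to k8s') {
    steps {
        sshagent(['kops-machine']) {
            sh "scp -o StrictHostKeyChecking=no deployment.yml ubuntu@<ip>:/home/ubuntu/"
            // apply creates the deployment if it does not exist yet and is a
            // no-op otherwise, so both commands can always run in sequence
            sh "ssh ubuntu@<ip> kubectl apply -f /home/ubuntu/deployment.yml"
            sh "ssh ubuntu@<ip> kubectl set image deployment/nginx-deployment nginx=account/repo:${BUILD_NUMBER}"
        }
    }
}
```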

Related

nginx image behaves differently in kubectl deployment and pods

I am using the nginx image to create pods as follows:
$ kubectl run nginx --image=nginx --port=80 -- /bin/sh -c 'sleep 20000'
$ kubectl create deployment nginx-deploy --image=nginx --port=80 --replicas=1
This results in two pods:
$ kubectl get pods
nginx 1/1 Running 0 24s
nginx-deploy-7496796997-wkhv8 1/1 Running 0 19s
curl connects to localhost in the "nginx-deploy" pod, whereas in the other pod it does not:
$ kubectl exec -it nginx -- /bin/sh -c 'curl localhost'
curl: (7) Failed to connect to localhost port 80: Connection refused
$ kubectl exec -it nginx-deploy-7496796997-wkhv8 -- /bin/sh -c 'curl localhost'
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
....
Any reason the nginx image behaves differently in these two pods?
# Here the command is /bin/sh -c 'sleep 20000'
kubectl run nginx --image=nginx --port=80 -- /bin/sh -c 'sleep 20000'
When you create a pod with a command like this, it overrides the ENTRYPOINT/CMD defined in the Dockerfile, so the nginx server is never started. Please refer to the Dockerfile of the nginx image. The rules are:
If neither command nor args is defined in Kubernetes, the Dockerfile's ENTRYPOINT and CMD are used.
If command is defined in Kubernetes (whether or not args is also defined), the Dockerfile's ENTRYPOINT and CMD are both ignored and the Kubernetes command is used.
If only args is defined in Kubernetes, the Dockerfile's ENTRYPOINT runs with the Kubernetes args.
In summary, the Kubernetes fields take priority over the Dockerfile.
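As a config fragment, the override that kubectl run performs here corresponds to setting command in the pod spec (the pod name is hypothetical):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-sleep
spec:
  containers:
  - name: nginx
    image: nginx
    # Overrides the image's ENTRYPOINT/CMD, so the nginx server never starts
    command: ["/bin/sh", "-c", "sleep 20000"]
```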
Your command /bin/sh -c 'sleep 20000' has overridden the default CMD/ENTRYPOINT defined in the nginx image. As a result, the nginx server does not start. If you kubectl run again without the command, nginx will run just as it does in the deployment.

Logs via qDebug() are invisible when run via docker-compose

I have written a Qt application which logs to the console via qDebug(). When run inside a Docker container, the application logs are visible as normal. But when the same Docker image is run via docker-compose up, there is no output visible at all. Why is that?
Edit:
The output is not visible either if I try to view it via docker logs:
docker run -d --rm --name test test-image
docker logs test
Working:
docker run -it --rm test-image
I finally found a solution: my detached docker run was missing the -t flag:
docker run -d --rm -t --name test test-image
The equivalent option for the docker-compose config is:
tty: true
Hope this is helpful to someone.
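For reference, in a docker-compose.yml the option sits under the service definition (the service name here is hypothetical):

```yaml
services:
  app:
    image: test-image
    # Allocates a pseudo-TTY, like docker run -t, so the process's output is
    # attached to a terminal and shows up in docker-compose logs / docker logs
    tty: true
```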

Linking 2 docker containers in chef recipe

I am trying to achieve with a Chef recipe what this command does:
docker run -d --name=nginx --restart=unless-stopped -p 80:80 -p 443:443 -v /etc/test/test.cert:/etc/test/test.cert -v /etc/test/test.key:/etc/test/test.key -v /etc/nginx/conf.d/nginx_ssl_conf.conf:/etc/nginx/conf.d/default.conf --link=rancher-server nginx
This is what I have come up with so far, but I am still unable to link the two containers:
docker_image 'nginx' do
  tag 'latest'
  action :pull
end

docker_container 'my_nginx' do
  repo 'nginx'
  tag 'latest'
  port ['80:80', '443:443']
  volumes ['/etc/test/test.cert:/etc/test/test.cert', '/etc/test/test.key:/etc/test/test.key', '/etc/nginx/conf.d/nginx_ssl_conf.conf:/etc/nginx/conf.d/default.conf']
  links ['rancher-server:nginx']
  subscribes :run, 'docker_image[nginx]'
end
Any thoughts or suggestions?
There is a links property which takes an array of links. There is an example in the docker cookbook's README if you search for "Manage container links".
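A sketch of a resource matching the docker run command above, assuming the docker cookbook's docker_container resource (untested). Note that --link=rancher-server without an alias defaults the alias to the container name, so the link entry below mirrors that rather than aliasing it as "nginx":

```ruby
docker_container 'nginx' do
  repo 'nginx'
  tag 'latest'
  port ['80:80', '443:443']
  restart_policy 'unless-stopped'
  volumes ['/etc/test/test.cert:/etc/test/test.cert',
           '/etc/test/test.key:/etc/test/test.key',
           '/etc/nginx/conf.d/nginx_ssl_conf.conf:/etc/nginx/conf.d/default.conf']
  # Equivalent of --link=rancher-server (alias defaults to the container name)
  links ['rancher-server:rancher-server']
  action :run
end
```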

How to properly start nginx in Docker

I want nginx in a Docker container to host a simple static hello-world HTML website, and I want to start it simply with "docker run imagename". In order to do that I added the run parameters to the Dockerfile, because I would like to host the application on Cloud Foundry as a next step. Unfortunately I get the following error when doing it like this.
Dockerfile
FROM nginx:alpine
COPY . /usr/share/nginx/html
EXPOSE 5000
CMD ["nginx -d -p 5000:5000"]
Error
Error starting userland proxy: Bind for 0.0.0.0:5000: unexpected error Permission denied.
From:
https://docs.docker.com/engine/reference/builder/#expose
EXPOSE does not make the ports of the container accessible to the host. To do that, you must use either the -p flag to publish a range of ports or the -P flag to publish all of the exposed ports. You can expose one port number and publish it externally under another number
CMD ["nginx -d -p 5000:5000"]
Your Dockerfile starts with
FROM nginx:alpine
which already starts nginx. After you build from your Dockerfile, run the image with:
docker run -d -p 5000:5000 <your_image>
Edit:
If you want to map the container's port 80 to port 5000 on the host:
docker run -d -p 5000:80 <your_image>
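A minimal working Dockerfile for this setup might look like the following sketch. The nginx:alpine base image already runs nginx in the foreground, so no CMD override is needed, and EXPOSE is documentation only; the actual port is published at run time:

```dockerfile
FROM nginx:alpine
# Static site contents served by nginx
COPY . /usr/share/nginx/html
# Documentation only; publish the port with -p at run time
EXPOSE 80
```

Then run it with docker run -d -p 5000:80 <your_image> and browse to port 5000 on the host.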

Docker nginx container exits instantly

I want to have some control over the official nginx image, so I wrote my own Dockerfile that adds some extra functionality to it.
The file has the following contents:
FROM nginx
RUN mkdir /var/www/html
COPY nginx/config/global.conf /etc/nginx/conf.d/
COPY nginx/config/nginx.conf /etc/nginx/nginx.conf
When I build this image and create a container from it using this command:
docker run -it -d -v ~/Projects/test-website:/var/www/html --name test-nginx my-nginx
it exits instantly, and I can't access the log files either. What could be the issue? I've copied these lines from the Dockerfile of the official nginx image, and that one works.
So I didn't know about the docker ps -a and docker logs <last container id> commands. After running them, it turned out I had a duplicate daemon off; directive.
Thanks for the help, guys!
