How to change server in admin.conf file in kubernetes? - nginx

I have a single-node Kubernetes cluster and I'm untainting the master node. I'm using the following commands to create my cluster.
#Only on the Master Node: On the master node initialize the cluster.
sudo kubeadm init --pod-network-cidr=192.168.0.0/16 --ignore-preflight-errors=NumCPU
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
#Add pod network add-on
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/2140ac876ef134e0ed5af15c65e414cf26827915/Documentation/kube-flannel.yml
# Ref: https://github.com/calebhailey/homelab/issues/3
kubectl taint nodes --all node-role.kubernetes.io/master-
kubectl get nodes
kubectl get pods --all-namespaces
#for nginx
kubectl create deployment nginx --image=nginx
#for deploying an nginx server
#https://kubernetes.io/docs/concepts/workloads/controllers/deployment/
Since the admin.conf file is generated automatically, it has set the server to https://172.17.0.2:6443. I want it to use '0.0.0.0:3000' instead. How can I do that?
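For reference, the address in question is the server field under the clusters section of the kubeconfig. A minimal sketch of inspecting and repointing it, assuming the default kubeadm cluster name kubernetes and that the API server is actually reachable on the new address:
#show the current server entry
kubectl config view --minify | grep server
#point the kubeconfig at a different address (hypothetical address shown)
kubectl config set-cluster kubernetes --server=https://0.0.0.0:3000
#alternatively, the address and port the API server binds to can be chosen at init time,
#e.g. kubeadm init --apiserver-advertise-address=<host-ip> --apiserver-bind-port=3000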

What would be the reason to restrict a multi-node cluster to only one node? Maybe you can give some more specific information; perhaps there is another workaround.

Related

Can we create more than 2 Riak clusters

Can we set up a Riak cluster with only 2 nodes, like this:
node01
node02
Or can we add more nodes, or fewer than 2? If we can, please let me know how we can achieve that.
It will depend on whether you are running your cluster under Docker or not.
If you are under Docker
In this case you can start a new Riak node with the command
docker run -d -P -e COORDINATOR_NODE=172.17.0.3 --label cluster.name=<your main node name> basho/riak-kv
For more explanation about this line you can read the Basho post: Running Riak in Docker
If you are not under a Docker container
As I haven't experimented with this case personally, I will only link the documentation for running a new node on a Riak server: Running a Cluster
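For what it's worth, a minimal sketch of the bare-metal case, assuming Riak is already installed on the new node and the default riak@<ip> node names (the IP below is just an example):
# on the new node, join an existing cluster member
riak-admin cluster join riak@192.168.1.10
# review and then apply the staged membership change
riak-admin cluster plan
riak-admin cluster commit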
Hope I understood the question correctly

How to properly start nginx in Docker

I want nginx in a Docker container to host a simple static hello-world HTML website. I want to simply start it with "docker run imagename". In order to do that I added the run parameters to the Dockerfile. The reason I want to do that is that I would like to host the application on Cloud Foundry as a next step. Unfortunately I get the following error when doing it like this.
Dockerfile
FROM nginx:alpine
COPY . /usr/share/nginx/html
EXPOSE 5000
CMD ["nginx -d -p 5000:5000"]
Error
Error starting userland proxy: Bind for 0.0.0.0:5000: unexpected error Permission denied.
From:
https://docs.docker.com/engine/reference/builder/#expose
EXPOSE does not make the ports of the container accessible to the host. To do that, you must use either the -p flag to publish a range of ports or the -P flag to publish all of the exposed ports. You can expose one port number and publish it externally under another number
CMD ["nginx -d -p 5000:5000"]
You add your dockerfile
FROM nginx:alpine
its already starts nginx.
after you build from your dockerfile
you should use this on
docker run -d -p 5000:5000 <your_image>
Edit:
If you want to map container port 80 to machine port 5000
docker run -d -p 5000:80 <your_image>
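Putting it together, a minimal sketch, assuming the static site sits in the build context and a hypothetical image tag hello-nginx:
# Dockerfile
FROM nginx:alpine
COPY . /usr/share/nginx/html
# no CMD needed: the base image already starts nginx, listening on port 80
# build the image and publish container port 80 on host port 5000
docker build -t hello-nginx .
docker run -d -p 5000:80 hello-nginx
# the site is now reachable at http://localhost:5000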

Docker run results in "host not found in upstream" error

I have a frontend-only web application hosted in Docker. The backend already exists, but it has a "custom IP" address, so I had to update my local /etc/hosts file to access it. So, from my local machine I am able to access the backend API without problems.
But the problem is that Docker somehow cannot resolve this "custom IP", even when the host is written in the container's (image's?) /etc/hosts file.
When the Docker container starts up I see this error
$ docker run media-saturn:dev
2016/05/11 07:26:46 [emerg] 1#1: host not found in upstream "my-server-address.com" in /etc/nginx/sites/ms.dev.my-company.com:36
nginx: [emerg] host not found in upstream "my-server-address.com" in /etc/nginx/sites/ms.dev.my-company.com:36
I update the /etc/hosts file via a command in the Dockerfile, like this:
# install wget
RUN apt-get update \
&& apt-get install -y wget \
&& rm -rf /var/lib/apt/lists/*
# The trick is to add the hostname on the same line as you use it, otherwise the hosts file will get reset, since every RUN command starts a new intermediate container
# it has to be https, otherwise authentication is required
RUN echo "123.45.123.45 my-server-address.com" >> /etc/hosts && wget https://my-server-address.com
When I ssh into the machine to check the current content of /etc/hosts, the line "123.45.123.45 my-server-address.com" is indeed there.
Can anyone help me out with this? I am a Docker newbie.
I have solved this. There are two things at play.
One is how it works locally and the other is how it works in Docker Cloud.
Local workflow
cd into root of project, where Dockerfile is located
build the image: docker build -t media-saturn:dev .
run the built image: docker run -it --add-host="my-server-address.com:123.45.123.45" -p 80:80 media-saturn:dev
Docker cloud workflow
Add the extra_hosts directive to your Stackfile, like this, and then click Redeploy in Docker Cloud so that the changes take effect:
extra_hosts:
- 'my-server-address.com:123.45.123.45'
Optimization tip
Ignore as many folders as possible to speed up the process of sending data to the Docker daemon:
add a .dockerignore file
typically you want to add folders like node_modules, bower_components and tmp
in my case tmp contained about 1.3 GB of small files, so ignoring it sped up the process significantly
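For example, a minimal .dockerignore sketch covering the folders mentioned above (adjust the names to whatever your project actually uses):
# .dockerignore
node_modules
bower_components
tmp
.git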

How to change the nginx process user of the official docker image nginx?

I'm using Docker Hub's official nginx image:
https://hub.docker.com/_/nginx/
The user of nginx (as defined in /etc/nginx/nginx.conf) is nginx. Is there a way to make nginx run as www-data without having to extend the docker image? The reason for this is, I have a shared volume, that is used by multiple containers - php-fpm that I'm running as www-data and nginx. The owner of the files/directories in the shared volume is www-data:www-data and nginx has trouble accessing that - errors similar to *1 stat() "/app/frontend/web/" failed (13: Permission denied)
I have a docker-compose.yml and run all my containers, including the nginx one with docker-compose up.
...
nginx:
  image: nginx:latest
  ports:
    - "80:80"
  volumes:
    - ./:/app
    - ./vhost.conf:/etc/nginx/conf.d/vhost.conf
  links:
    - fpm
...
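For reference, a quick way to see the uid mismatch being described (a sketch, assuming the compose services are named nginx and fpm as above):
# uid/gid that php-fpm writes files with
docker-compose exec fpm id www-data
# uid/gid that the nginx workers run as
docker-compose exec nginx id nginx
# numeric owner of the shared files
docker-compose exec nginx ls -ln /app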
FYI
It is a problem of the php-fpm image.
It is not about usernames, it is about the www-data user ID.
What to do
Fix your php-fpm container and don't break the good nginx container.
Solutions
Here is my post with a solution for docker-compose (nginx + php-fpm (alpine)): https://stackoverflow.com/a/36130772/1032085
Here is my post with a solution for a php-fpm (debian) container: https://stackoverflow.com/a/36642679/1032085
Solution for the official php-fpm image. Create a Dockerfile:
FROM php:5.6-fpm
RUN usermod -u 1000 www-data
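A hedged usage sketch for that Dockerfile, with a hypothetical image tag and the fpm service from the compose file above:
# build the patched php-fpm image
docker build -t myapp-fpm .
# reference it from docker-compose.yml (image: myapp-fpm instead of the stock php-fpm image)
# and recreate the containers
docker-compose up -d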
I know the OP asked for a solution that doesn't extend the nginx image, but I've landed here without that constraint. So I've made this Dockerfile to run nginx as www-data:www-data (33:33):
FROM nginx:1.17
# Customization of the nginx user and group ids in the image. It's 101:101 in
# the base image. Here we use 33 which is the user id and group id for www-data
# on Ubuntu, Debian, etc.
ARG nginx_uid=33
ARG nginx_gid=33
# The worker processes in the nginx image run as the user nginx with group
# nginx. This is where we override their respective uid and gid to something
# else that lines up better with file permissions.
# The -o switch allows reusing an existing user id
RUN usermod -u $nginx_uid -o nginx && groupmod -g $nginx_gid -o nginx
It accepts a uid and gid on the command line during image build. To make an nginx image that runs as your current user id and group id for example:
docker build --build-arg nginx_uid=$(id -u) --build-arg nginx_gid=$(id -g) .
The nginx user and group ids are currently hardcoded to 101:101 in the image.
Another option is to grab the source from https://github.com/nginxinc/docker-nginx and change the Dockerfile to support build args. Example: changing the stable Dockerfile of the buster release (https://github.com/nginxinc/docker-nginx/blob/master/stable/buster/Dockerfile) to have the nginx user/group uid/gid set as build args:
FROM debian:buster-slim
LABEL maintainer="NGINX Docker Maintainers <docker-maint@nginx.com>"
ENV NGINX_VERSION 1.18.0
ENV NJS_VERSION 0.4.3
ENV PKG_RELEASE 1~buster
# Change NGINX guid/uid #
ARG nginx_guid=101
ARG nginx_uid=101
RUN set -x \
# create nginx user/group first, to be consistent throughout docker variants
&& addgroup --system --gid $nginx_guid nginx \
&& adduser --system --disabled-login --ingroup nginx --no-create-home --home /nonexistent --gecos "nginx user" --shell /bin/false --uid $nginx_uid nginx \
This way is safer than just doing usermod, since if something is done in other places, like chown nginx:nginx, it will use the gid/uid that was set.
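For example, a hedged build invocation for that modified Dockerfile, using 33:33 (www-data on Debian/Ubuntu) and a hypothetical image tag:
docker build --build-arg nginx_uid=33 --build-arg nginx_guid=33 -t nginx-www-data .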

How to setup Nginx as a load balancer using the StrongLoop Nginx Controller

I'm attempting to setup Nginx as a load balancer using the StrongLoop Nginx Controller. Nginx will be acting as a load balancer for a StrongLoop LoopBack application hosted by the standalone StrongLoop Process Manager. However, I've been unsuccessful at making the Nginx deployment following the official directions from StrongLoop. Here are the steps I've taken:
Step #1 -- My first step was to install Nginx and the StrongLoop Nginx Controller on an AWS EC2 instance. I launched an EC2 server (Ubuntu 14.04) to host the load balancer, and attached an Elastic IP to the server. Then I executed the following commands:
$ ssh -i ~/mykey.pem ubuntu@[nginx-ec2-ip-address]
$ sudo apt-get update
$ sudo apt-get install nginx
$ sudo apt-get install build-essential
$ curl -sL https://deb.nodesource.com/setup_0.12 | sudo bash -
$ sudo apt-get install -y nodejs
$ sudo npm install -g strong-nginx-controller
$ sudo sl-nginx-ctl-install -c 444
Then I opened up port 444 in the security group of the EC2 instance using a Custom TCP Rule.
Step #2 -- My second step was to setup two Loopback application servers. To accomplish this I launched two more EC2 servers (both Ubuntu 14.04) for the application servers, and attached an Elastic IP to each server. Then I ran the following series of commands, once on each application server:
$ ssh -i ~/mykey.pem ubuntu@[application-server-ec2-ip-address]
$ sudo apt-get update
$ sudo apt-get install build-essential
$ curl -sL https://deb.nodesource.com/setup_0.12 | sudo bash -
$ sudo apt-get install -y nodejs
$ sudo npm install -g strong-pm
$ sudo sl-pm-install
$ sudo /sbin/initctl start strong-pm
Step #3 -- My third step was to deploy the application to each of the application servers. For this I used StrongLoop Arc:
$ cd /path/to/loopback-getting-started-intermediate # my application
$ slc arc
Once in the StrongLoop Arc web console, I built a tar for the application, and deployed it to both application servers. Then in the Arc Process Manager, I connected to both application servers. Once connected, I clicked "load balancer," and entered the Nginx host and port into the form and pressed save. This caused a message to pop up saying "load balancer config saved."
Something strange happened at this point: The fields in StrongLoop Arc where I just typed the settings for the load balancer (host and port) reverted back to the original values the fields had before I started typing. (The original port value was 555 and the original value in the host field was the address of my second application server.)
Don't know what to do next -- This is where I really don't know what to do next. (I tried opening my web browser and navigating to the IP address of the Nginx load balancer, using several different port values. I tried 80, 8080, and 3001, having opened up each in the security group, in an attempt to find the place to which I need to navigate in order to see "load balancing" in action. However, I saw nothing by navigating to each of these places, with the exception of port 80, which served up the "Welcome to nginx" page, not what I'm looking for.)
How do I setup Nginx as a load balancer using the StrongLoop Nginx Controller? What's the next step in the process, assuming all of my steps listed are correct.
What I usually do is this:
sudo sl-nginx-ctl-install -c http://0.0.0.0:444
Maybe this can solve your problem.
