Redirecting in NGINX on localhost - nginx

I want it so that when I type 10.10.0.0 in the browser, it takes me to the index page, but it doesn't.
This is what I tried:
server {
    listen 8080;
    server_name 10.10.0.0;
    return 301 http://localhost:8080/index.html;
}

I didn't have time to test this, but try the following:
server {
    listen 8080;
    server_name 10.10.0.0;
    location / {
        return 301 http://localhost:8080/index.html;
    }
}
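This is also untested, but once nginx picks up that block you should be able to verify the redirect from the command line, for example (assuming nginx on 10.10.0.0 is actually listening on port 8080):
curl -I http://10.10.0.0:8080/
# expected: HTTP/1.1 301 Moved Permanently
# expected: Location: http://localhost:8080/index.html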

Let us try to dissect it.
Does this work without Docker?
If yes, let us look at which ports you are exposing to the outside world from your container.
For example, in docker-compose you need to expose them like below.
NOTE: see "ports", NOT "expose", which says: "To the external world I am exposing 8080, and from there I will route internally to port 80 in the container."
nginx:
  build:
    context: ./nginx
    dockerfile: Dockerfile
  command: /usr/sbin/nginx -g 'daemon off;' -c /etc/nginx/nginx.conf
  container_name: my_nginx_server
  tty: true
  expose:
    - "80"        # This is internal to the container network
  ports:
    - "8080:80"   # HOST:CONTAINER
If you are using the command line, it should include "-p 8080:80" when running the container.
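For example (the container and image names here are just placeholders for whatever you build from your Dockerfile):
docker run -d --name my_nginx_server -p 8080:80 <your-nginx-image>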
If it does not work without Docker, check your nginx <--> uwsgi (or whatever) <--> your_app settings.
Please share more info: Dockerfile, docker-compose.yml.

Related

Why am I getting a Read-only file system error from Nginx?

Dear K8s community team,
I am getting this error message from nginx when I deploy my application pod. My application, an Angular 6 app, is hosted inside an nginx server, which is deployed as a Docker container inside EKS.
I have my application configured with a "read-only container filesystem", but I am using an "ephemeral mounted" volume of type "emptyDir" in combination with the read-only filesystem.
So I am not sure of the reason for the following error:
2019/04/02 14:11:29 [emerg] 1#1: mkdir() "/var/cache/nginx/client_temp" failed (30: Read-only file system)
nginx: [emerg] mkdir() "/var/cache/nginx/client_temp" failed (30: Read-only file system)
My deployment.yaml is:
...
spec:
  volumes:
    - name: tmp-volume
      emptyDir: {}
  # Pod Security Context
  securityContext:
    fsGroup: 2000
  containers:
    - name: {{ .Chart.Name }}
      volumeMounts:
        - mountPath: /tmp
          name: tmp-volume
      image: "{{ .Values.image.name }}"
      imagePullPolicy: {{ .Values.image.pullPolicy }}
      securityContext:
        capabilities:
          add:
            - NET_BIND_SERVICE
          drop:
            - ALL
      securityContext:
        readOnlyRootFilesystem: true
      ports:
        - name: http
          containerPort: 80
          protocol: TCP
...
nginx.conf is:
...
http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;
    # Turn off the bloody buffering to temp files
    proxy_buffering off;
    sendfile off;
    keepalive_timeout 120;
    server_names_hash_bucket_size 128;
    # These two should be the same or nginx will start writing
    # large request bodies to temp files
    client_body_buffer_size 10m;
    client_max_body_size 10m;
...
It seems like your nginx is not running as the root user.
Since release 1.12.1-r2, the nginx daemon runs as user 1001.
1.12.1-r2
The nginx container has been migrated to a non-root container approach. Previously the container ran as the root user and the nginx daemon was started as the nginx user. From now on, both the container and the nginx daemon run as user 1001. As a consequence, the configuration files are writable by the user running the nginx process.
This is why you are unable to bind to port 80; it's necessary to use an unprivileged port (1024 or higher).
You should use:
ports:
  - '80:8080'
  - '443:8443'
and edit the nginx.conf so it listens on port 8080:
server {
    listen 0.0.0.0:8080;
    ...
Or run nginx as root:
command: [ "/bin/bash", "-c", "sudo nginx -g 'daemon off;'" ]
As already stated by Crou, the nginx image maintainers switched to a non-root-user approach.
This has two implications:
Your nginx process might not be able to bind all network sockets.
Your nginx process might not be able to read all file system locations.
You can try to change the ports as described by Crou (nginx.conf and deployment.yaml). Even with the NET_BIND_SERVICE capability added to the container, this does not necessarily mean that the nginx process gets this capability. You can try to add the capability with
$ sudo setcap 'cap_net_bind_service=+ep' $(which nginx)
as a RUN instruction in your Dockerfile.
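A minimal sketch of that (assuming a Debian-based nginx image, where the binary lives at /usr/sbin/nginx and setcap is provided by libcap2-bin; adjust paths for your base image):
FROM nginx:latest
# Allow the nginx binary to bind to ports below 1024 without running as root
RUN apt-get update && apt-get install -y libcap2-bin \
    && setcap 'cap_net_bind_service=+ep' /usr/sbin/nginx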
However, it is usually simpler to just change the listening port.
For the filesystem, please note that /var/cache/nginx/ is not mounted as a volume and thus belongs to the root filesystem, which is mounted read-only. The simplest way to solve this is to add a second ephemeral emptyDir for /var/cache/nginx/ in the volumes section, as sketched below. Please make sure that the nginx user has the file system permissions to read and write this directory. This is usually already taken care of by the Docker image maintainers as long as you stay with the default locations.
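A minimal sketch of that change in the deployment.yaml (the volume name nginx-cache is only illustrative):
volumes:
  - name: tmp-volume
    emptyDir: {}
  - name: nginx-cache
    emptyDir: {}
containers:
  - name: {{ .Chart.Name }}
    volumeMounts:
      - mountPath: /tmp
        name: tmp-volume
      - mountPath: /var/cache/nginx
        name: nginx-cache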
I recommend that you do not switch back to running nginx as root, as this might expose you to security vulnerabilities.

Docker running nginx as a proxy to another webserver (on a Pi)

I am kind of in over my head with my current small project (although it should not be that hard).
I am trying to run multiple webpages using Docker on my Pi (for testing purposes), which should all be reachable using the Pi's IP.
I currently run a minimal lighttpd (based on the resin/rpi-raspbian image):
docker run -d -v <testconfig>:/etc/lighttpd -p <pi-ip>:8080:80 <image name>
(This server is reachable using the browser on the Pi and on other computers in the network.)
For nginx I run another container with a simple config
(starting with http://nginx.org/en/docs/beginners_guide.html),
containing a webpage and images to test the container config.
This container is reachable using <pi-ip>:80.
Then I tried to add a proxy to the locations
(I played around, so now there are 3 locations for the same redirect):
location /prox1/ {
    proxy_pass http://<pi-ip>:8080;
}
location /prox2/ {
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_pass http://<pi-ip>:8080;
}
location /prox3/ {
    fastcgi_pass <pi-ip>:8080;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    fastcgi_param QUERY_STRING $query_string;
}
Versions 1 & 2 give a 404 (I tried adding a rewrite, but then nginx redirected onto itself because the /prox1/ part was cut off).
Version 3 yields a timeout.
Now I am not sure if I still have to dig on the nginx side, or if I have to add a connection on the Docker side between the containers.
PS: the Pi is running ArchForArm (using Xfce as desktop) because I couldn't find docker-compose in the Raspbian repository.
-- EDIT --:
I currently start everything manually (so no compose file).
The lighttpd is started with:
docker run -d --name mylighttpd -v <testconfig>:/etc/lighttpd -p <pi-ip>:8080:80 <image name>
If I understood it correctly, it is now listening on the local network (in the range of <pi-ip>) on port 8080, which maps to the test web server's port 80. (I have added --name so it is easier to stop it.)
The nginx is started like:
docker run --name mynginx --rm -p <pi-ip>:80:80 -v <config>:/data <image name>
The 8080 was added in the EXPOSE in the Dockerfile.
I currently think I misunderstood how two containers on the same machine connect to each other, and that I should add a virtual network; I am currently trying to find some docs on that.
PS: I am not using the already existing nginx-zeroconf from the repo because it tells me it can't read the installed Docker version. (And the only example for using it with compose also needs another container which seems unavailable for my architecture.)
-- edit2 --:
For the simple proxy_pass the problem could be the URL.
I added a deeper folder "prox1" in the "www" folder, containing an index file, and that one is shown when I ask for the page.
It seems like <pi-ip>:80/prox1/
is redirected to <pi-ip>:8080/prox1/,
but if I try to rewrite it (inside "location /prox1/") it seems to first delete the prox1 and then decide it is now part of the original location
<pi-ip>:80/.
PS: I am aware that it might be a better design to place the system inside a network other than "bridge" and only expose the proxy, but I am trying to learn this stuff in small steps.
-- edit3 --:
Trying compose now, but it seems I have encountered another part I don't understand (which is why I wanted to get it working without compose first).
I am trying to follow http://docs.master.dockerproject.org/compose/compose-file/#ipv4-address-ipv6-address
networks:
  backbone:
    driver: bridge
    ipam:
      driver: default
      config:
        - subnet: 172.16.238.0/16
          gateway: 172.16.238.1
services:
  nginx:
    image: <nginx-image>
    ports:
      - "80:80"
    volumes:
      - <config>:/data
    depends_on:
      - lighttpd
    networks:
      backbone:
        ipv4_address: 172.16.238.2
  lighttpd:
    image: <lighttpd-image>
    ports:
      - "8080:80"
    volumes:
      - <testconfig>:/etc/lighttpd
    networks:
      backbone:
        ipv4_address: 172.16.238.3
Now I have to find out why I get "user specified IP address is supported only when connecting to networks with user configured subnets"; I assume the main networks block creates a network called "backbone".
-- edit4 --:
It seems IP blocks have to be written differently from all the docs I have seen; the correct form is:
...
networks:
  backbone:
    ipv4_address: 172.16.0.2/16
...
Now I have to figure out how to drop that part of the URL, and I am good to go.
The core problem seems to have been the missing nginx parameter proxy_redirect, which I found while rambling through the docs. The current nginx.conf is
(/data/www contains an index.html with a relative link to some images in /data/images):
worker_processes auto;
events {
    worker_connections 1024;
}
http {
    server {
        listen 80;
        location / {
            root /data/www;
        }
        location /images/ {
            root /data;
        }
        location /prox0/ {
            proxy_pass http://lighttpd:80;
            proxy_redirect default;
            proxy_buffering off;
        }
    }
}
Manually starting on the local IP seems to work, but docker-compose is easier
(if compose is not used, replace lighttpd:80 with the IP & port used to start the server):
networks:
  backbone:
    driver: bridge
    ipam:
      driver: default
      config:
        - subnet: 172.16.238.0/16
          gateway: 172.16.238.1
services:
  nginx:
    image: <nginx-image>
    ports:
      - "80:80"
    volumes:
      - <config>:/data
    depends_on:
      - lighttpd
    networks:
      backbone:
        ipv4_address: 172.16.0.2
  lighttpd:
    image: <lighttpd-image>
    ports:
      - "8080:80"
    volumes:
      - <testconfig>:/etc/lighttpd
    networks:
      backbone:
        ipv4_address: 172.16.0.3
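If compose is not used, roughly the same result should be possible with a user-defined network when starting the containers manually (untested sketch; image names and paths as above):
docker network create backbone
docker run -d --name lighttpd --network backbone -v <testconfig>:/etc/lighttpd -p 8080:80 <lighttpd-image>
docker run -d --name nginx --network backbone -v <config>:/data -p 80:80 <nginx-image>
# On a user-defined network the container name "lighttpd" resolves via Docker's internal DNS,
# so proxy_pass http://lighttpd:80; keeps working.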

docker-compose scale with sticky sessions

I have a webserver that requires a websocket connection in production. I deploy it using docker-compose with nginx as a proxy.
So my compose file looks like this:
version: '2'
services:
  app:
    restart: always
  nginx:
    restart: always
    ports:
      - "80:80"
Now if I scale the "app" service to multiple instances, docker-compose will perform round robin on each call to the internal DNS name "app".
Is there a way to tell the docker-compose load balancer to apply sticky sessions?
Alternatively, is there a way to solve it using nginx?
A possible solution that I don't like:
multiple definitions of app
version: '2'
services:
  app1:
    restart: always
  app2:
    restart: always
  nginx:
    restart: always
    ports:
      - "80:80"
(And then in the nginx config file I can define sticky sessions between app1 and app2, as sketched below.)
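For reference, that nginx-side workaround could look roughly like this (the upstream name and the app port 8080 are assumptions):
upstream app_backend {
    ip_hash;                 # pin each client IP to the same backend
    server app1:8080;
    server app2:8080;
}
server {
    listen 80;
    location / {
        proxy_pass http://app_backend;
    }
}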
The best result I got from searching:
https://github.com/docker/dockercloud-haproxy
But this requires me to add another service (maybe replace nginx?), and the docs there are pretty poor about sticky sessions.
I wish Docker would just allow configuring it with a simple line in the compose file.
Thanks!
Take a look at jwilder/nginx-proxy. This image provides an nginx reverse proxy that listens for containers that define the VIRTUAL_HOST variable and automatically updates its configuration on container creation and removal. tpcwang's fork allows you to use the IP_HASH directive on a container level to enable sticky sessions.
Consider the following Compose file:
nginx:
  image: tpcwang/nginx-proxy
  ports:
    - "80:80"
  volumes:
    - /var/run/docker.sock:/tmp/docker.sock:ro
app:
  image: tutum/hello-world
  environment:
    - VIRTUAL_HOST=<your_ip_or_domain_name>
    - USE_IP_HASH=1
Let's get it up and running and then scale app to three instances:
docker-compose up -d
docker-compose scale app=3
If you check the nginx configuration file you'll see something like this:
docker-compose exec nginx cat /etc/nginx/conf.d/default.conf
...
upstream 172.16.102.132 {
    ip_hash;
    # desktop_app_3
    server 172.17.0.7:80;
    # desktop_app_2
    server 172.17.0.6:80;
    # desktop_app_1
    server 172.17.0.4:80;
}
server {
    server_name 172.16.102.132;
    listen 80;
    access_log /var/log/nginx/access.log vhost;
    location / {
        proxy_pass http://172.16.102.132;
    }
}
The nginx container has automatically detected the three instances and has updated its configuration to route requests to all of them using sticky sessions.
If we try to access the app, we can see that it always reports the same hostname on each refresh. If we remove the USE_IP_HASH environment variable, we'll see that the hostname actually changes, that is, the nginx proxy is using round robin to balance our requests.
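One quick way to check this from the host (assuming the proxy answers on port 80 at <your_ip_or_domain_name> and the hello-world page prints its hostname):
for i in 1 2 3 4 5; do curl -s http://<your_ip_or_domain_name>/ | grep -i hostname; done
# with USE_IP_HASH=1 the reported hostname stays the same on every request;
# without it, the hostnames rotate across the three instances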

How to reach another container from a dockerised nginx

I have nginx in a Docker container, and a Node.js webapp in another Docker container.
The Node.js server is reachable from the host server on port 8080.
The nginx Docker container is listening on port 80 (I will do the certificate later; first this base must be working).
Now I want a subdomain to be forwarded to this Node.js app on port 8080, let's say app1.example.com.
From outside I can reach the app by the server IP (or hostname) and port 8080, but not on app1.example.com. And it does work on app1.example.com:8080 (I have opened up port 8080 on the host server).
I get a bad gateway nginx message when approaching app1.example.com. So I get into the first nginx container, but how do I get back to the host server to proxy pass it to port 8080 of the host server (and not port 8080 of the nginx container)? I am looking for the reverse EXPOSE syntax.
The main problem is, of course, that if I use the IP and port 127.0.0.1:8080 it will try it on the nginx container....
So how do I let the nginx container route back to the host's 127.0.0.1:8080?
I have tried 0.0.0.0 and defining an upstream; I have actually been googling a lot and have tried a lot of configurations... but have not yet found a working one.
Edit
I just found out this Docker command might help:
sudo docker network inspect bridge
This shows the IP address used inside the containers (in my case 172.17.0.2), but I am not sure this address stays the same every time Docker restarts... (e.g. after a server reboot).
Edit
Following alkaline's answer I now have this (but it is still not working):
My docker-compose.yml file:
version: "2"
services:
nginx:
container_name: nginx
image: nginx_img
build: ../docker-nginx-1/
ports:
- "80:80"
networks:
- backbone
nodejs:
container_name: nodejs
image: merites/docker-simple-node-server
build: ../docker-simple-node-server/
networks:
- backbone
expose:
- 8080
networks:
backbone:
driver: bridge
And my nginx config (I skipped the include in the conf.d folder for simplicity):
worker_processes 1;
events { worker_connections 1024; }
http {
    sendfile on;
    upstream upsrv {
        server nodejs:8080;
    }
    server {
        listen 80;
        server_name app1.example.com;
        location / {
            proxy_pass http://upsrv;
        }
    }
}
edit 31-08-2016
This might be the problem: the name is not backbone, but is named after the folder I started the service from:
sudo docker network ls
outputs:
NETWORK ID          NAME                       DRIVER    SCOPE
1167c2b0ec31        bridge                     bridge    local
d06ffaf26fe2        dockerservices1_backbone   bridge    local
5e4ec13d790a        host                       host      local
7d1f8c32f259        none                       null      local
edit 01-09-2016
It might be caused by the way I have set up my nginx Docker container?
This is the Dockerfile I used:
############################################################
# Dockerfile to build Nginx Installed Containers
# Based on Ubuntu
############################################################
# Set the base image to Ubuntu
FROM ubuntu
# File Author / Maintainer
MAINTAINER Maintainer Name
# Install Nginx
# Add application repository URL to the default sources
# RUN echo "deb http://archive.ubuntu.com/ubuntu/ raring main universe" >> /etc/apt/sources.list
# Update the repository
RUN apt-get update
# Install necessary tools
RUN apt-get install -y nano wget dialog net-tools
# Download and Install Nginx
RUN apt-get install -y nginx
# Remove the default Nginx configuration file
RUN rm -v /etc/nginx/nginx.conf
# Copy a configuration file from the current directory
ADD nginx.conf /etc/nginx/
# Append "daemon off;" to the beginning of the configuration
RUN echo "daemon off;" >> /etc/nginx/nginx.conf
# Expose ports
EXPOSE 80
# Set the default command to execute
# when creating a new container
CMD service nginx start
My final solution, 1st Sept. 2016
I now use this compose file:
version: "2"
services:
nginx:
image: nginx
container_name: nginx
volumes:
- ./nginx-configs:/etc/nginx/conf.d
ports:
- "80:80"
networks:
- backbone
nodejs:
container_name: nodejs
image: merites/docker-simple-node-server
build: ../docker-simple-node-server/
networks:
- backbone
expose:
- 8080
networks:
backbone:
driver: bridge
In the project folder, from which you run docker-compose up -d, I added a folder named nginx-configs. This folder will 'override' all the files in the nginx container's /etc/nginx/conf.d directory.
Therefore I copied the default.conf from the nginx container before I added this volume mount, using the command:
docker exec -t -i container_name /bin/bash
and then cat /etc/nginx/conf.d/default.conf,
and added the same default.conf to the nginx-configs folder in the project.
Besides the default I added app1.conf with this content:
upstream upsrv1 {
    server nodejs:8080;
}
server {
    listen 80;
    server_name app1.example.com;
    location / {
        proxy_pass http://upsrv1;
    }
}
This way, I can easily add a second app (for example, a hypothetical app2.conf like the sketch below), a third, and so on.
So the basics are working now.
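Such an app2.conf for a second backend could look like this (the service name nodejs2 and its port are assumptions; follow the same pattern as app1.conf):
upstream upsrv2 {
    server nodejs2:8080;
}
server {
    listen 80;
    server_name app2.example.com;
    location / {
        proxy_pass http://upsrv2;
    }
}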
Here's a best practice: only expose port 80 outside of the host. The nodejs app can be in a private network that is only accessible through nginx.
version: "2"
services:
nginx:
...
ports:
- "80:80"
networks:
- backbone
nodejs:
...
networks:
- backbone
expose:
- 8080
networks:
backbone:
driver: bridge
In your nginx.conf file, the upstream server can be listed as nodejs:8080. The Docker daemon will resolve it to the correct internal IP.

Setting up Nginx Proxy in Docker using Ansible

I am attempting to set up an nginx container that serves as a proxy to another container I have set up. I would like to automate this as I need to deploy a similar setup across several servers. For this I am using Ansible.
Here is my nginx.conf:
events {
    worker_connections 1024;
}
http {
    server {
        listen 8080;
        location / {
            proxy_pass http://192.168.1.14:9000;
        }
    }
}
Here is the relevant part of my Ansible YAML file:
- name: Install Nginx
  docker:
    name: nginx
    image: nginx
    detach: True
    ports:
      - 8080:8080
    volumes:
      - /etc/docker/nginx/nginx.conf:/etc/nginx/nginx.conf:ro
When I first run my playbook, nginx is running but is not bound to 8080 as seen here:
6a4f610e86d nginx "nginx -g 'daemon off" 35 minutes ago Up Less than a second 80/tcp, 443/tcp nginx
However, if I run the nginx container directly with:
docker run -d -v /etc/docker/nginx/nginx.conf:/etc/nginx/nginx.conf:ro -p 8080:8080 nginx
nginx and my proxy run as expected, listening on 8080:
c3a46421045c nginx "nginx -g 'daemon off" 2 seconds ago Up 1 seconds 80/tcp, 443/tcp, 0.0.0.0:8080->8080/tcp determined_swanson
Any idea why it works one way but not the other?
Update
Per the guidance given in the selected answer, I updated my YAML file thusly:
- name: Install Nginx
  docker:
    name: nginx
    image: nginx
    detach: True
    ports:
      - 8080:8080
    expose:
      - 8080
    volumes:
      - /etc/docker/nginx/nginx.conf:/etc/nginx/nginx.conf:ro
First, you need to make sure your nginx image EXPOSEs port 8080, and you can specify this directly in your Ansible YAML file:
expose
(added in 1.5)
List of additional container ports to expose for port mappings or links. If the port is already exposed using EXPOSE in a Dockerfile, you don't need to expose it again.
Then, the only other difference I see when looking at the Ansible docker module is that the ports are inside double quotes:
ports:
  - "8080:9000"
Also, if you want to proxy_pass to another container in the same Docker daemon, you might want to use a link instead of a fixed IP address:
links:
  - "myredis:aliasedredis"
That way, your nginx.conf includes a fixed rule:
proxy_pass http://aliasedredis:9000;
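Putting those pieces together, the task could end up looking roughly like this (the backend container name and its alias are only illustrative):
- name: Install Nginx
  docker:
    name: nginx
    image: nginx
    detach: True
    ports:
      - "8080:8080"
    expose:
      - 8080
    links:
      - "mybackend:backend"
    volumes:
      - /etc/docker/nginx/nginx.conf:/etc/nginx/nginx.conf:ro
with nginx.conf then using proxy_pass http://backend:9000; instead of the fixed IP address.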
