That's a very strange error, because I have tried configuring this multiple times and still can't get my WebSocket to work properly. I don't think the problem is on the client side.
So I have two guesses:
I might have improperly configured the nginx.conf file.
I might have improperly configured something related to Docker, for example entrypoint.sh.
So far I have tried editing both files and also tried different variations of configuring the routes. It could also be some dumb mistake, but I have spent a long time on this, so I really appreciate any help or advice.
Here is asgi.py:
import os

os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'config.settings')

from django.core.asgi import get_asgi_application
from channels.routing import ProtocolTypeRouter, URLRouter
from channels.auth import AuthMiddlewareStack

from service import routing

# Daphne is started with "config.asgi:application", so the router must be
# bound to the name "application". The original module rebound "application"
# to a plain get_asgi_application() on its last line, which silently
# discarded the websocket route.
application = ProtocolTypeRouter({
    'http': get_asgi_application(),
    'websocket': AuthMiddlewareStack(
        URLRouter(
            routing.websocket_urlpatterns
        )
    ),
})
routing.py:
from django.urls import re_path

from .consumers import EventConsumer

websocket_urlpatterns = [
    # path() does not understand regex syntax, so '^api/wsEvents/' would be
    # matched literally; use re_path() for the regex form.
    re_path(r'^api/wsEvents/$', EventConsumer.as_asgi()),
]
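The EventConsumer itself is not shown here; for context, a minimal consumer that this routing would work with could look like this (a sketch, not my actual code):
# consumers.py (hypothetical minimal sketch)
import json

from channels.generic.websocket import AsyncWebsocketConsumer


class EventConsumer(AsyncWebsocketConsumer):
    async def connect(self):
        # Accept the handshake; without this the client sees the socket close.
        await self.accept()

    async def receive(self, text_data=None, bytes_data=None):
        # Echo incoming events back, just to prove the round trip works.
        await self.send(text_data=json.dumps({'echo': text_data}))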
my nginx.conf:
daemon off;

events {}

http {
    upstream django {
        server django_gunicorn:8000;
    }

    upstream websocket {
        server django_asgi:8080;
    }

    map $http_upgrade $connection_upgrade {
        default upgrade;
        ''      close;
    }

    server {
        # docker-compose publishes this container as 80:80, so listen on 80
        listen 80;

        location / {
            # "localhost" inside the nginx container is nginx itself;
            # proxy to the upstream defined above instead
            proxy_pass http://django;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;
        }

        location /static/ {
            autoindex on;
            # alias takes a single filesystem path inside this container;
            # it must match the volume mounted in docker-compose
            # (static:/app/static/) - here it was just /static before
            alias /app/static/;
        }

        location /api/wsEvents/ {
            # WebSocket traffic goes to the ASGI (daphne) upstream, not gunicorn
            proxy_pass http://websocket;
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection $connection_upgrade;
            proxy_redirect off;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Host $server_name;
        }
    }
}
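The handshake can be tested straight through nginx with curl; if the routing is right, the response should be 101 Switching Protocols (the Sec-WebSocket-Key is just an arbitrary base64 nonce):
curl -i -N \
  -H "Connection: Upgrade" \
  -H "Upgrade: websocket" \
  -H "Sec-WebSocket-Version: 13" \
  -H "Sec-WebSocket-Key: dGhlIHNhbXBsZSBub25jZQ==" \
  http://localhost/api/wsEvents/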
docker-compose:
services:
  django_asgi:
    build:
      context: .
    command: daphne config.asgi:application --port 8080 --bind 0.0.0.0
    volumes:
      - .:/app/backend
    # "environment:" expects KEY=VALUE entries; a file of variables
    # has to be passed via env_file instead
    env_file:
      - .env
    links:
      - db       # (db service not shown in the question)
      - redis
    depends_on:
      - db
      - redis

  redis:
    restart: always
    image: redis
    ports:
      - 6379:6379
    volumes:
      - redisdata:/data

  django_gunicorn:
    volumes:
      - static:/app/static   # here was just /static/ before; same change in default.conf
    env_file:
      - .env
    build:
      context: .
    ports:
      - 8000:8000
    links:
      - redis

  nginx:
    build: ./nginx
    volumes:
      - static:/app/static/   # here was just static:/static/ before
    depends_on:
      - django_gunicorn
      - django_asgi
    ports:
      - "80:80"

# named volumes must be declared at the top level or compose rejects them
volumes:
  static:
  redisdata:
Any advice or help is appreciated
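In case it matters, this is roughly how I exercise it from the client side (the websockets package here is a stand-in for the real frontend):
# ws_smoke_test.py (hypothetical helper, not part of the project)
import asyncio

import websockets  # pip install websockets


async def main():
    # Goes through nginx on the published port 80, so the whole proxy
    # chain (nginx -> daphne -> consumer) is exercised.
    async with websockets.connect("ws://localhost/api/wsEvents/") as ws:
        print("handshake completed:", ws)


asyncio.run(main())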
I'm using Keycloak as SSO for Directus. They are located in the same network.
version: '3'

services:
  nginx:
    image: nginx:latest
    container_name: nginx
    restart: unless-stopped
    volumes:
      - ./nginx/nginx.conf:/etc/nginx/nginx.conf
    ports:
      - 80:80
    networks:
      - directus_keycloak
    depends_on:
      - keycloak
      - directus_service

  postgres:
    container_name: postgres
    image: postgres:13.7-alpine
    volumes:
      - ./db:/var/lib/postgresql/data
    networks:
      - directus_keycloak
    ports:
      - ...
    environment:
      ...

  redis:
    container_name: redis
    image: redis:6
    networks:
      - directus_keycloak

  directus_service:
    container_name: directus_service
    image: directus/directus:latest
    ports:
      - 8055:8055
    volumes:
      - ./uploads:/directus/uploads
      - ./extensions:/directus/extensions
      - ./snapshots:/directus/snapshots
    networks:
      - directus_keycloak
    depends_on:
      - redis
      - postgres
      - keycloak
    env_file:
      - ./.env

  keycloak:
    image: quay.io/keycloak/keycloak:legacy
    environment:
      DB_VENDOR: postgres
      DB_ADDR: 'postgres'
      DB_PORT: '5432'
      DB_DATABASE: '...'
      DB_USER: '...'
      DB_PASSWORD: '...'
      KEYCLOAK_USER: admin
      KEYCLOAK_PASSWORD: ...
      PROXY_ADDRESS_FORWARDING: "true"
      REDIRECT_SOCKET: "proxy-http"
      KEYCLOAK_FRONTEND_URL: http://keycloak.localhost/auth
    depends_on:
      - postgres
    networks:
      - directus_keycloak
    ports:
      - "8080:8080"

networks:
  directus_keycloak:
    driver: bridge
I can access Directus and Keycloak using NGINX:
http {
    upstream keycloak_backend {
        least_conn;
        server keycloak:8080;
    }

    upstream directus_backend {
        least_conn;
        server directus_service:8055;
    }

    server {
        listen 80;
        server_name keycloak.localhost;

        proxy_set_header X-Forwarded-For $proxy_protocol_addr;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header Host $host;

        location / {
            proxy_pass http://keycloak_backend;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Host $server_name;
        }
    }

    server {
        listen 80;
        server_name api.localhost;

        proxy_set_header X-Forwarded-For $proxy_protocol_addr;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header Host $host;

        location / {
            proxy_pass http://directus_backend;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Host $server_name;
        }
    }
}
But when I try to log into the Directus admin panel using Keycloak as the provider, I get a "We are sorry... page not found" error.
There is a .env file too:
KEY='..'
SECRET='...'
DB_CLIENT='pg'
DB_HOST='postgres'
DB_PORT='5432'
DB_DATABASE='...'
DB_USER='...'
DB_PASSWORD='...'
CACHE_ENABLED=false
CACHE_STORE='redis'
CACHE_REDIS='redis://redis:6379'
ADMIN_EMAIL='admin@example.com'
ADMIN_PASSWORD='...'
AUTH_PROVIDERS="keycloak"
AUTH_KEYCLOAK_DRIVER="openid"
AUTH_KEYCLOAK_CLIENT_ID="..."
AUTH_KEYCLOAK_CLIENT_SECRET="..."
AUTH_KEYCLOAK_ISSUER_URL="http://keycloak:8080/auth/realms/.../.well-known/openid-configuration"
AUTH_KEYCLOAK_PROFILE_URL="http://keycloak:8080/auth/realms/.../.well-known/openid-configuration"
AUTH_KEYCLOAK_ALLOW_PUBLIC_REGISTRATION="true"
AUTH_KEYCLOAK_IDENTIFIER_KEY="email"
AUTH_KEYCLOAK_SCOPE="openid email"
I suspect there should be some way to set the redirect URL in the Keycloak interface, but I only found a setting for validating redirect URLs.
Is there any solution?
It works now. The problem was with the configuration of the client inside the Keycloak realm, not with the configuration above.
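Concretely, the realm client needs its valid redirect URIs to cover the callback that Directus redirects the browser to. In an exported client JSON, that part would look roughly like this (paths and hosts here are illustrative, adjust to your setup):
{
  "clientId": "...",
  "rootUrl": "http://api.localhost",
  "redirectUris": ["http://api.localhost/auth/login/keycloak/callback"],
  "webOrigins": ["http://api.localhost"]
}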
I have two Keycloak instances running on two separate swarm stacks.
This is what my stack files look like:
INSTANCE 1

version: "3.4"

services:
  # keycloak Server
  keycloak:
    image: jboss/keycloak:11.0.0
    deploy:
      replicas: 1
      update_config:
        parallelism: 1
        delay: 10s
        order: start-first
      restart_policy:
        condition: on-failure
    environment:
      # DB_STUFF
      PROXY_ADDRESS_FORWARDING: "true"
    ports:
      - "18080:18080"
    command:
      - "-b"
      - "0.0.0.0"
      - "-Djboss.socket.binding.port-offset=10000"

INSTANCE 2

version: "3.4"

services:
  # keycloak Server
  keycloak:
    image: jboss/keycloak:11.0.0
    deploy:
      replicas: 1
      update_config:
        parallelism: 1
        delay: 10s
        order: start-first
      restart_policy:
        condition: on-failure
    environment:
      # DB_STUFF
      PROXY_ADDRESS_FORWARDING: "true"
    ports:
      - "18081:18081"
    command:
      - "-b"
      - "0.0.0.0"
      - "-Djboss.socket.binding.port-offset=10001"
And the nginx configuration:
location /auth/ {
    proxy_pass http://localhost:18080/auth/;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_set_header X-Forwarded-Port 80;
}

location /auth2/ {
    proxy_pass http://localhost:18081/auth/;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_set_header X-Forwarded-Port 80;
}
I wanted to be able to access each of them through a separate path, but when I try to access the admin console of the second instance at /auth2 it redirects me to the first one at /auth.
I have little knowledge about nginx so any help is appreciated.
You may want to change the web context on your second Keycloak instance to auth2.
Set an environment variable WEB_CONTEXT to auth2 on your second Keycloak instance. Then add a CLI script file web-context.cli like this:
set WEB_CONTEXT=${env.WEB_CONTEXT:auth}
set KEYCLOAK_CONFIG_FILE=${env.KEYCLOAK_CONFIG_FILE:standalone-ha.xml}
set JBOSS_HOME=${env.JBOSS_HOME}
echo Setting web-context to $WEB_CONTEXT in $JBOSS_HOME/standalone/configuration/$KEYCLOAK_CONFIG_FILE
embed-server --server-config=$KEYCLOAK_CONFIG_FILE --std-out=echo
/subsystem=keycloak-server/:write-attribute(name=web-context,value=$WEB_CONTEXT)
stop-embedded-server
Add the file to /opt/jboss/startup-scripts.
See "Runnin custom scripts on startup" section in the README for details.
I am trying to set up a home server with Jenkins and JFrog Artifactory OSS using docker-compose and nginx as a reverse proxy.
The Docker host has "homeserver" as its hostname, and my goal is to reach Jenkins at http://homeserver/jenkins and Artifactory at http://homeserver/artifactory.
While setting this up for Jenkins was no problem, I can't get Artifactory to run the way I want it to.
My docker-compose.yml is as follows:
version: '3.8'

services:
  reverseproxy:
    image: nginx
    container_name: homeserver-reverse-proxy
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf

  jenkins:
    image: jenkins/jenkins:lts
    container_name: homeserver-jenkins
    environment:
      - JENKINS_OPTS="--prefix=/jenkins"

  artifactory:
    image: docker.bintray.io/jfrog/artifactory-oss:latest
    container_name: homeserver-artifactory
    volumes:
      - artifactory-data:/var/opt/jfrog/artifactory
    ports:
      - "8081:8081"
      - "8082:8082"

volumes:
  artifactory-data:
I started off with the following nginx configuration:
events {}

http {
    upstream jenkins {
        server homeserver-jenkins:8080;
    }

    server {
        server_name homeserver;

        location /jenkins {
            proxy_set_header Host $host:$server_port;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;
            proxy_pass http://jenkins;
            proxy_read_timeout 90;
            proxy_http_version 1.1;
            proxy_request_buffering off;
        }

        rewrite ^/$ /ui/ redirect;
        rewrite ^/ui$ /ui/ redirect;

        location / {
            if ($http_x_forwarded_proto = '') {
                set $http_x_forwarded_proto $scheme;
            }
            chunked_transfer_encoding on;
            client_max_body_size 0;
            proxy_read_timeout 2400s;
            proxy_pass_header Server;
            proxy_cookie_path ~*^/.* /;
            proxy_pass http://homeserver-artifactory:8082;
            proxy_next_upstream error timeout non_idempotent;
            proxy_next_upstream_tries 1;
            proxy_set_header X-JFrog-Override-Base-Url $http_x_forwarded_proto://$host:$server_port;
            proxy_set_header X-Forwarded-Port $server_port;
            proxy_set_header X-Forwarded-Proto $http_x_forwarded_proto;
            proxy_set_header Host $http_host;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

            location ~ ^/artifactory/ {
                proxy_pass http://homeserver-artifactory:8081;
            }
        }
    }
}
This lets me access Jenkins as intended over http://homeserver/jenkins.
Artifactory is running (and accessible) with this configuration, but only directly at /.
Note that the Artifactory parts are exactly what the JFrog website suggests as configuration.
That is not what I want, though.
In order to switch Artifactory to http://homeserver/artifactory, I see two possible options:
1. Configure Artifactory to include a URL prefix (like I configured Jenkins with JENKINS_OPTS="--prefix=/jenkins"). However, I could not figure out how to do this. The Artifactory system.yml lets me configure the ports, but not the URL. Setting the Base URL over the web interface does not seem to work either; it just affects the redirected URLs and generated links (precisely as stated in the tooltip on the website).
2. Change the nginx configuration and rewrite the request URIs. I tried, but it just doesn't work as I hoped it would.
Here is what I tried for the second option:
events {}

http {
    upstream jenkins {
        server homeserver-jenkins:8080;
    }

    server {
        server_name homeserver;

        location /jenkins {
            # rewrite ^/jenkins(.*)$ $1 break;
            proxy_set_header Host $host:$server_port;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;
            proxy_pass http://jenkins;
            proxy_read_timeout 90;
            # proxy_redirect http://homeserver-jenkins:8080
            proxy_http_version 1.1;
            proxy_request_buffering off;
        }

        rewrite ^/jfrog/$ /jfrog/ui/ redirect;
        rewrite ^/jfrog/ui$ /jfrog/ui/ redirect;

        location /jfrog/ {
            if ($http_x_forwarded_proto = '') {
                set $http_x_forwarded_proto $scheme;
            }
            rewrite ^/jfrog(.*)$ $1 break;
            chunked_transfer_encoding on;
            client_max_body_size 0;
            proxy_read_timeout 2400s;
            proxy_pass_header Server;
            proxy_cookie_path ~*^/.* /;
            proxy_pass http://homeserver-artifactory:8082;
            proxy_next_upstream error timeout non_idempotent;
            proxy_next_upstream_tries 1;
            proxy_set_header X-JFrog-Override-Base-Url $http_x_forwarded_proto://$host:$server_port;
            proxy_set_header X-Forwarded-Port $server_port;
            proxy_set_header X-Forwarded-Proto $http_x_forwarded_proto;
            proxy_set_header Host $http_host;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

            location ~ ^/artifactory/ {
                proxy_pass http://homeserver-artifactory:8081;
            }
        }
    }
}
Here I already changed the location from http://homeserver/artifactory to http://homeserver/jfrog to avoid clashes with the nested location block; I don't know if that's important or not.
When running this configuration, I get a lot of error messages from my reverse proxy like
"GET /ui/css/chunk-vendors.4485dbea.css HTTP/1.1" 404 154
The JFrog logo bitmap in the loading screen does not show, and the website itself does not load correctly either.
After checking the network traffic with Firefox, it seems to me that the problem comes from the <link/> elements in the HTML document that is loaded when visiting http://homeserver/jfrog/ui/.
For example the CSS which led to the error message above comes from:
<link href="/ui/css/chunk-vendors.4485dbea.css" rel="preload" as="style">
Firefox then tries to load http://homeserver/ui/css/chunk-vendors.4485dbea.css. This does not work; the correct URL would be http://homeserver/jfrog/ui/css/chunk-vendors.4485dbea.css.
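One workaround I can think of (an untested sketch, and it only helps if nothing else on the server needs /ui/): since the HTML references absolute /ui/ paths, proxy those straight to Artifactory as well, alongside the /jfrog/ location:
location /ui/ {
    proxy_pass http://homeserver-artifactory:8082;
    proxy_set_header Host $http_host;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}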
I want to print the request_body and response_body from nginx.
I have tried to implement a few of the solutions that I learned from here, but they did not work in my case.
Are there additional changes I need to make in my nginx.conf file?
Here is my conf file.
worker_processes 4;

events { worker_connections 1024; }

http {
    sendfile on;

    upstream consumer-portal {
        server xx.xx.xx.xx:9006;
    }

    upstream download-zip-service {
        server xx.xx.xx.xx:9012;
    }

    server {
        listen 8765;

        location / {
            proxy_pass http://download-zip-service/;
            proxy_redirect off;
            # proxy_set_header Host $host;
            # proxy_set_header X-Real-IP $remote_addr;
            # proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            # proxy_set_header X-Forwarded-Host $server_name;
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection "Upgrade";

            # socket timeout settings added
            fastcgi_read_timeout 7200s;
            send_timeout 7200s;
            proxy_connect_timeout 7200s;
            proxy_send_timeout 7200s;
            proxy_read_timeout 7200s;

            # new properties added
            proxy_request_buffering off;
            proxy_buffering off;
        }

        location /consumer-portal/ {
            proxy_pass http://consumer-portal/;
            proxy_redirect off;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Host $server_name;
        }
    }
}
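For reference, the solutions I tried look roughly like this (log and format names are mine):
# inside the http block
log_format bodylog '$remote_addr - [$time_local] "$request" '
                   '$status request_body="$request_body"';

# inside each location whose bodies should be logged
access_log /var/log/nginx/body.log bodylog;
I suspect proxy_request_buffering off may interfere, since the docs say $request_body is only filled in when nginx buffers the request body; and for the response body, stock nginx seems to have no variable at all without the Lua (OpenResty) module.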
Below is the docker-compose.yml
version: '3'

services:
  nginx:
    restart: always
    build: ../../conf/sandbox/
    volumes:
      - ./mysite.template:/etc/nginx/conf.d/mysite.template
    ports:
      - "8765:8765"
    networks:
      - cloud

networks:
  cloud:
    driver: bridge
Please let me know what changes I need to make.
Thanks in advance.
Can anybody provide a complete example of how to run an insecure (without TLS) ingress controller and resource with nginx, in order to get remote access to services running inside a Kubernetes cluster? I did not find anything useful.
PS: my Kubernetes cluster is running on bare metal, not on a cloud provider.
The following may be useful information about what I did:
$kubectl get svc
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
attachmentservice 10.254.111.232 <none> 80/TCP 3d
financeservice 10.254.38.228 <none> 80/TCP 3d
gatewayservice 10.254.38.182 nodes 80/TCP 3d
hrservice 10.254.61.196 <none> 80/TCP 3d
kubernetes 10.254.0.1 <none> 443/TCP 31d
messageservice 10.254.149.125 <none> 80/TCP 3d
redis-service 10.254.201.241 <none> 6379/TCP 15d
settingservice 10.254.157.155 <none> 80/TCP 3d
trainingservice 10.254.166.92 <none> 80/TCP 3d
nginx-ingress-rc.yml
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx-ingress-rc
  labels:
    app: nginx-ingress
spec:
  replicas: 1
  selector:
    app: nginx-ingress
  template:
    metadata:
      labels:
        app: nginx-ingress
    spec:
      containers:
      - image: nginxdemos/nginx-ingress:0.6.0
        imagePullPolicy: Always
        name: nginx-ingress
        ports:
        - containerPort: 80
          hostPort: 80
services-ingress.yml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: services-ingress
spec:
  rules:
  - host: ctc-cicd2
    http:
      paths:
      - path: /gateway
        backend:
          serviceName: gatewayservice
          servicePort: 80
      - path: /training
        backend:
          serviceName: trainingservice
          servicePort: 80
      - path: /attachment
        backend:
          serviceName: attachmentservice
          servicePort: 80
      - path: /hr
        backend:
          serviceName: hrservice
          servicePort: 80
      - path: /message
        backend:
          serviceName: messageservice
          servicePort: 80
      - path: /settings
        backend:
          serviceName: settingservice
          servicePort: 80
      - path: /finance
        backend:
          serviceName: financeservice
          servicePort: 80
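The two files above would be applied with something like:
$kubectl create -f nginx-ingress-rc.yml
$kubectl create -f services-ingress.yml
$kubectl get ingress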
nginx.conf new content
upstream default-services-ingress-ctc-cicd2-trainingservice {
    server 12.16.64.5:8190;
    server 12.16.65.6:8190;
}
upstream default-services-ingress-ctc-cicd2-attachmentservice {
    server 12.16.64.2:8095;
}
upstream default-services-ingress-ctc-cicd2-hrservice {
    server 12.16.64.7:8077;
}
upstream default-services-ingress-ctc-cicd2-messageservice {
    server 12.16.64.9:8065;
}
upstream default-services-ingress-ctc-cicd2-settingservice {
    server 12.16.64.10:8098;
    server 12.16.65.4:8098;
}
upstream default-services-ingress-ctc-cicd2-financeservice {
    server 12.16.64.4:8092;
}
upstream default-services-ingress-ctc-cicd2-gatewayservice {
    server 12.16.64.6:8090;
    server 12.16.65.7:8090;
}
server {
    listen 80;
    server_name ctc-cicd2;

    location /gateway {
        proxy_http_version 1.1;
        proxy_connect_timeout 60s;
        proxy_read_timeout 60s;
        client_max_body_size 1m;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-Port $server_port;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_buffering on;
        proxy_pass http://default-services-ingress-ctc-cicd2-gatewayservice;
    }

    location /training {
        proxy_http_version 1.1;
        proxy_connect_timeout 60s;
        proxy_read_timeout 60s;
        client_max_body_size 1m;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-Port $server_port;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_buffering on;
        proxy_pass http://default-services-ingress-ctc-cicd2-trainingservice;
    }

    location /attachment {
        proxy_http_version 1.1;
        proxy_connect_timeout 60s;
        proxy_read_timeout 60s;
        client_max_body_size 1m;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-Port $server_port;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_buffering on;
        proxy_pass http://default-services-ingress-ctc-cicd2-attachmentservice;
    }

    location /hr {
        proxy_http_version 1.1;
        proxy_connect_timeout 60s;
        proxy_read_timeout 60s;
        client_max_body_size 1m;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-Port $server_port;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_buffering on;
        proxy_pass http://default-services-ingress-ctc-cicd2-hrservice;
    }

    location /message {
        proxy_http_version 1.1;
        proxy_connect_timeout 60s;
        proxy_read_timeout 60s;
        client_max_body_size 1m;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-Port $server_port;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_buffering on;
        proxy_pass http://default-services-ingress-ctc-cicd2-messageservice;
    }

    location /settings {
        proxy_http_version 1.1;
        proxy_connect_timeout 60s;
        proxy_read_timeout 60s;
        client_max_body_size 1m;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-Port $server_port;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_buffering on;
        proxy_pass http://default-services-ingress-ctc-cicd2-settingservice;
    }

    location /finance {
        proxy_http_version 1.1;
        proxy_connect_timeout 60s;
        proxy_read_timeout 60s;
        client_max_body_size 1m;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-Port $server_port;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_buffering on;
        proxy_pass http://default-services-ingress-ctc-cicd2-financeservice;
    }
}
According to the Kubernetes ingress documentation, an Ingress is a collection of rules that allow inbound connections to reach the cluster services. This, of course, requires that you have an ingress controller deployed in your cluster. While there are many ways to implement an ingress controller, a simple one that will help you understand the concept can be found here. It is written in Go and basically listens to the kube API for new ingress resources. When it sees a new ingress resource, it regenerates the nginx conf from that configuration and reloads the nginx container that makes up your ingress controller:
const (
    nginxConf = `
events {
    worker_connections 1024;
}
http {
    # http://nginx.org/en/docs/http/ngx_http_core_module.html
    types_hash_max_size 2048;
    server_names_hash_max_size 512;
    server_names_hash_bucket_size 64;
    {{range $ing := .Items}}
    {{range $rule := $ing.Spec.Rules}}
    server {
        listen 80;
        server_name {{$rule.Host}};
        {{ range $path := $rule.HTTP.Paths }}
        location {{$path.Path}} {
            proxy_set_header Host $host;
            proxy_pass http://{{$path.Backend.ServiceName}}.{{$ing.Namespace}}.svc.cluster.local:{{$path.Backend.ServicePort}};
        }{{end}}
    }{{end}}{{end}}
}`
)
What this allows for is one single entry point into your cluster that proxy traffic to all of the services inside of your Kubernetes cluster.
Say you have a service named foo inside the namespace bar. Kube-DNS allows us to reach that service from inside a kubernetes cluster form the DNS address foo.bar.svc.cluster.local. This is basically what Ingress does for us. We specify a path in which we want to use to reach the service and then the ingress controller proxies that path to the service foo in your cluster.