Spec:
.net core 2
Identity Server 4
On my local dev machine, when I visit http://127.0.0.1:5000/.well-known/openid-configuration with Postman, I can find, for example, "jwks_uri" with the address http://127.0.0.1:5000/.well-known/openid-configuration/jwks.
I can visit http://127.0.0.1:5000/.well-known/openid-configuration/jwks and see a result like:
{
  "keys": [
    {
      "kty": "RSA",
      "use": "sig",
      "kid": "6ca39c3dd4ffda97d502243e25fa4e54",
      "e": "AQAB",
      "n": "sZthlS0HE1pkbSnMlPyKNDkAqkQryeKG7YSRMeUbrDQARu-9f11iUFUblAdXUhuFRu0R77AQ-mhjy7kfjQMOT58gp3aMa17HTKcMxZRZEi-zcXZuxVA7Q0nuWrWp4_-0VAMV4OhGromZCFtUb26kRJXyKMNlHSM2irSJ9LWnx6NtSkHMrC_kv3kpciZWLx__9DkVM7wmYuGz9DMezoz7-FuwcJcGJHmVz7RNRwGNhdcvEG8nJE3fl8QQ16CjOim2X845gaIc9dWKi1MAA_LS1M2EK4aU8FZjVqgQgY472zrwGtUtwz25aUEZu130fthZabvOiWTDbztuYtOmrxP7BQ",
      "alg": "RS256"
    }
  ]
}
Port 5000 is the important detail here.
[Screenshot: dev machine]
On the server machine, when I visit http://192.168.168.13:81/.well-known/openid-configuration with Postman, I can find, for example, "jwks_uri" with the address http://192.168.168.13/.well-known/openid-configuration/jwks.
I cannot visit http://192.168.168.13/.well-known/openid-configuration/jwks because I receive a 404 error:
nginx error!
The page you are looking for is not found.
There is no port 81 in that URL.
I can visit http://192.168.168.13:81/.well-known/openid-configuration/jwks and see a result like:
{
  "keys": [
    {
      "kty": "RSA",
      "use": "sig",
      "kid": "6ca39c3dd4ffda97d502243e25fa4e54",
      "e": "AQAB",
      "n": "sZthlS0HE1pkbSnMlPyKNDkAqkQryeKG7YSRMeUbrDQARu-9f11iUFUblAdXUhuFRu0R77AQ-mhjy7kfjQMOT58gp3aMa17HTKcMxZRZEi-zcXZuxVA7Q0nuWrWp4_-0VAMV4OhGromZCFtUb26kRJXyKMNlHSM2irSJ9LWnx6NtSkHMrC_kv3kpciZWLx__9DkVM7wmYuGz9DMezoz7-FuwcJcGJHmVz7RNRwGNhdcvEG8nJE3fl8QQ16CjOim2X845gaIc9dWKi1MAA_LS1M2EK4aU8FZjVqgQgY472zrwGtUtwz25aUEZu130fthZabvOiWTDbztuYtOmrxP7BQ",
      "alg": "RS256"
    }
  ]
}
[Screenshot: server machine]
These are my CentOS firewall settings:
public (active)
  target: default
  icmp-block-inversion: no
  interfaces: eth0
  sources:
  services: ssh dhcpv6-client
  ports: 80/tcp 443/tcp 81/tcp 82/tcp
  protocols:
  masquerade: no
  forward-ports:
  source-ports:
  icmp-blocks:
  rich rules:
This is my nginx configuration for reverse proxy:
server {
    listen 81;

    location / {
        proxy_pass http://127.0.0.1:5000;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }
}
As far as I have investigated, the problem is that .well-known/openid-configuration does not advertise the proper port (in this case 81) on the server machine.
The endpoints themselves exist, because when I manually add port 81 to an endpoint URL, the endpoint is available.
Because my application relies on .well-known/openid-configuration to autodiscover endpoints, my authentication doesn't work. I don't know whether the problem is in the IdentityServer 4 configuration or in the CentOS configuration.
If anybody is facing the same issue, the following link will be helpful :)
http://amilspage.com/set-identityserver4-url-behind-loadbalancer/
Especially this part:
app.UseMiddleware<PublicFacingUrlMiddleware>();
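That middleware rebuilds IdentityServer's public origin from headers sent by the reverse proxy, so nginx has to forward them. A sketch of the nginx side for the setup in the question (the X-Forwarded-* header names are the conventional ones; this is an assumed fix, not config taken from the question):

server {
    listen 81;

    location / {
        proxy_pass http://127.0.0.1:5000;
        proxy_http_version 1.1;
        # Tell the app which host/port/scheme the client actually used,
        # so the discovery document advertises http://192.168.168.13:81/...
        proxy_set_header Host $http_host;
        proxy_set_header X-Forwarded-Host $http_host;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}

Note that $http_host keeps the client-supplied port, while $host strips it; with listen 81 that difference is exactly the missing :81.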
Related
I set up flask/uwsgi and an nginx container.
First of all, I could not access the site with nginx listening on port 80 and the docker port mapping 80:80. No idea why this would not work.
So I made nginx listen on port 8090 and mapped port 80:8090 in docker:
nginx:
  container_name: fa_nginx
  build:
    context: ./nginx
    dockerfile: Dockerfile
  restart: unless-stopped
  ports:
    - 80:8090
  networks:
    - fa_nginx_net
I am not exposing any ports in the Dockerfile; there I just copy the conf file.
## nginx conf
server {
    listen 8090;

    location / {
        include /etc/nginx/uwsgi_params;
        uwsgi_pass web:8087;
    }
}
Like that, I can access the site at http://localhost and browse around, e.g. to http://localhost/faq.
However, when I submit a form (login), nginx switches to another port and the redirect URL looks like http://localhost:8090/auth/login.
The redirect in flask's login view after successful form validation is simply return redirect(url_for('main.profile')).
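For context on where the unexpected port in the redirect can come from: Flask/Werkzeug build absolute URLs from the Host header and scheme in the WSGI environ, i.e. from whatever the proxy forwards. Werkzeug ships werkzeug.middleware.proxy_fix.ProxyFix for exactly this; below is a minimal stdlib-only sketch of the same idea (the class and app names are hypothetical, not from the question):

```python
# A tiny WSGI middleware that rewrites the environ from X-Forwarded-*
# headers, so url_for()/redirect() see the client-facing host and scheme.
class ForwardedFix:
    def __init__(self, app):
        self.app = app

    def __call__(self, environ, start_response):
        host = environ.get("HTTP_X_FORWARDED_HOST")
        proto = environ.get("HTTP_X_FORWARDED_PROTO")
        if host:
            environ["HTTP_HOST"] = host          # what absolute URLs are built from
        if proto:
            environ["wsgi.url_scheme"] = proto   # http vs https
        return self.app(environ, start_response)


def echo_host(environ, start_response):
    """Demo app: answers with the host it believes it is serving."""
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [environ["HTTP_HOST"].encode()]


if __name__ == "__main__":
    wrapped = ForwardedFix(echo_host)
    env = {"HTTP_HOST": "web:8087", "HTTP_X_FORWARDED_HOST": "localhost"}
    print(b"".join(wrapped(env, lambda s, h: None)).decode())
```

With nginx sending proxy_set_header X-Forwarded-Host $http_host; the app then generates redirects for the public hostname instead of the internal container address.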
Here is the flask/uwsgi setup (a shortened version, without all the env vars etc.):
web:
  container_name: fa_web
  build:
    context: ./web
    dockerfile: Dockerfile
  expose:
    - "8087"
  networks:
    - fa_nginx_net
[uwsgi]
plugin = python3
## python file where flask object app is defined.
wsgi-file = run.py
## The flask instance defined in run.py
callable = fapp
enable-threads = false
master = true
processes = 2
threads = 2
## socket on which uwsgi server should listen
protocol = http
socket = :8087
buffer-size = 32768
## Socket permissions.
chmod-socket = 660
vacuum = true
## various
die-on-term = true
## run.py
from app import create_app, db
import os
## init flask app
fapp = create_app()
if __name__ == "__main__":
if os.environ.get('FLASK_ENV') == 'development':
fapp.run(use_reloader=True, debug=True, host='0.0.0.0', port=8086)
else:
fapp.run(host='0.0.0.0', port=8087)
I have no idea why this is happening and how to solve this.
OK, it is working now...
Basically I removed plugin = python3 and protocol = http from uwsgi.ini. (nginx's uwsgi_pass speaks the binary uwsgi protocol, so the backend socket must not be switched to HTTP.)
My setup now looks like this:
## nginx default.conf
server {
    listen 8090;
    #server_name fa_web;
    ## To map the server block to an ip:
    #server_name 1.2.3.4;
    ## To map the server block to a domain:
    #server_name example.com www.example.com;

    location / {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header Host $http_host;
        proxy_set_header X-NginX-Proxy true;
        include /etc/nginx/uwsgi_params;
        uwsgi_pass web:8087;
    }
}
## uwsgi.ini
[uwsgi]
## python file where flask object app is defined.
wsgi-file = run.py
## The flask instance defined in run.py
callable = fapp
enable-threads = false
master = true
processes = 2
threads = 2
## socket on which uwsgi server should listen
socket = :8087
buffer-size = 32768
#post-buffering = 8192 ## workaround for consuming POST requests
## Socket permissions.
chmod-socket = 660
vacuum = true
## various
die-on-term = true
The headers added in the nginx conf didn't make a difference; I just added them while getting everything configured correctly.
I have the following nginx.config file:
events {}

http {
    # ...

    # application version 1a
    upstream version_1a {
        server localhost:8090;
    }

    # application version 1b
    upstream version_1b {
        server localhost:8091;
    }

    split_clients "${arg_token}" $appversion {
        50% version_1a;
        50% version_1b;
    }

    server {
        # ...
        listen 7080;

        location / {
            proxy_set_header Host $host;
            proxy_pass http://$appversion;
        }
    }
}
I have two Node.js servers listening on ports 8090 and 8091, and I am hitting the URL http://localhost:7080. My expectation is that nginx will randomly split the traffic between the version_1a and version_1b upstreams, but all the traffic goes to version_1a. Any insight into why this might be happening?
(I want to use this configuration for canary traffic.)
Validate that the variable you are using to split the traffic is set correctly, and that its value is uniformly distributed, otherwise the traffic will not be split evenly. In particular, if requests arrive without a token query argument, ${arg_token} is empty for every request, its hash is constant, and every request lands in the same bucket.
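One way to guard against an empty key, sketched against the config in the question (the map block and the $split_key variable name are my own additions, not from the question):

# Fall back to the client address when no ?token= is present, so the
# string fed to split_clients is never constant.
map $arg_token $split_key {
    ""      $remote_addr;
    default $arg_token;
}

split_clients "${split_key}" $appversion {
    50% version_1a;
    *   version_1b;
}

The * bucket just catches the remainder; keeping explicit 50%/50% works the same way.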
We have a problem with our application, Nginx, and Keycloak. There are 3 instances: instance 1 is the APP, instance 2 is NGINX (reverse proxy), and instance 3 is Keycloak.
When a user logs in, the session is created in Keycloak, but when the user returns to the SiAe application, it returns 403.
In the Keycloak administration console we can see that the login succeeded and the session is open, but the return to the application still does not work.
Logs
Nginx:
"GET /opensat/?state=dcc1c40f-3183-4c7b-8342-f7df620cf0b3&session_state=605ba79a-ee05-4918-a96d-71466e31210a&code=fe066a17-a97a-495c-94f9-b1e5e3d6ac1f.605ba79a-ee05-4918-a96d-71466e31210a.f91920a4-3267-4de5-9788-24093a32c217 HTTP/1.1" 403 405 "https://mydomain/" "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:71.0) Gecko/20100101 Firefox/71.0"
APP:
ERROR org.keycloak.adapters.OAuthRequestAuthenticator - failed to turn code into token
java.net.SocketException: Connection reset
at java.net.SocketInputStream.read(SocketInputStream.java:196)
at java.net.SocketInputStream.read(SocketInputStream.java:122)
at sun.security.ssl.InputRecord.readFully(InputRecord.java:442)
at sun.security.ssl.InputRecord.read(InputRecord.java:480)
at sun.security.ssl.SSLSocketImpl.readRecord(SSLSocketImpl.java:934)
at sun.security.ssl.SSLSocketImpl.performInitialHandshake(SSLSocketImpl.java:1332)
at sun.security.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:1359)
at sun.security.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:1343)
at org.apache.http.conn.ssl.SSLSocketFactory.createLayeredSocket(SSLSocketFactory.java:573)
at org.keycloak.adapters.SniSSLSocketFactory.createLayeredSocket(SniSSLSocketFactory.java:114)
at org.apache.http.conn.ssl.SSLSocketFactory.connectSocket(SSLSocketFactory.java:557)
at org.keycloak.adapters.SniSSLSocketFactory.connectSocket(SniSSLSocketFactory.java:109)
at org.apache.http.conn.ssl.SSLSocketFactory.connectSocket(SSLSocketFactory.java:414)
at org.apache.http.impl.conn.DefaultClientConnectionOperator.openConnection(DefaultClientConnectionOperator.java:180)
Keycloak:
INFO [org.keycloak.storage.ldap.LDAPIdentityStoreRegistry] (default task-1) Creating new LDAP Store for the LDAP storage provider: 'ldap_pre', LDAP Configuration: {pagination=[true], fullSyncPeriod=[-1], usersDn=[ou=usuarios,dc=domain,dc=es], connectionPooling=[true], cachePolicy=[DEFAULT], useKerberosForPasswordAuthentication=[false], importEnabled=[true], enabled=[true], changedSyncPeriod=[86400], bindDn=[cn=admin,dc=domain,dc=es], usernameLDAPAttribute=[uid], lastSync=[1575269470], vendor=[other], uuidLDAPAttribute=[entryUUID], connectionUrl=[ldap://MIIP:389], allowKerberosAuthentication=[false], syncRegistrations=[false], authType=[simple], debug=[false], searchScope=[1], useTruststoreSpi=[ldapsOnly], priority=[0], userObjectClasses=[inetOrgPerson, organizationalPerson, person], rdnLDAPAttribute=[cn], editMode=[WRITABLE], validatePasswordPolicy=[false], batchSizeForSync=[1000]}, binaryAttributes: []
Configurations:
Nginx:
location / {
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-Server $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Host $host;
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_pass http://IP_KEYCLOAK:8081;
}
Keycloak:
....
<server name="default-server">
<http-listener name="default" socket-binding="http" redirect-socket="proxyhttps" proxy-address-forwarding="true" enable-http2="true"/>
......
<socket-binding name="proxy-https" port="443"/>
.....
APP .json:
{
  "realm": "domain",
  "auth-server-url": "https://domainkeycloak/",
  "ssl-required": "none",
  "resource": "sso",
  "enable-cors": true,
  "credentials": {
    "secret": "98236f9a-c8b1-488c-8b36-ace4f95b1aa6"
  },
  "confidential-port": 0,
  "disable-trust-manager": true,
  "allow-any-hostname": true
}
Could anybody help us, please?
The error Connection reset means that the application cannot complete its request to the Keycloak server. You should investigate further by logging in to the application server and trying to reach the Keycloak domain on port 443 (e.g. with telnet or openssl s_client), since you set auth-server-url to https://domainkeycloak/.
The solution was to put Keycloak behind a separate Nginx instance instead of using one Nginx for both the APP and Keycloak. Now we have two Nginx instances, and Keycloak works fine with our APP.
I am trying to proxy_pass to an HTTP WordPress site that is set up in a Docker container on an Amazon ECS instance. The client gets to the site through a test server we have set up (https://test.xxxxxxx.com). When a user goes to https://test.xxxxxxx.com, I want the address bar to keep showing https://test.xxxxxxx.com while serving the page from my WordPress site (http://xx.xxx.xxx.xxx on port 80).
I can get it to go to my WordPress site, but it looks broken. I am getting a lot of mixed-content errors because I'm loading http files via an https request. I understand what's happening, but I can't seem to fix it, even after trying all of the suggestions I could find online.
I have tried changing several settings, both in the Nginx file in the sites-available folder and in wp-config.php on my WordPress site. Below is one thing I tried; almost all the tutorials I found, and everything I tried, were variations of this.
#Nginx file
server {
    listen 443;

    location / {
        proxy_pass http://xx.xxx.xxx.xxx:80;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
#wp-config.php
if ($_SERVER['HTTP_X_FORWARDED_PROTO'] === 'https')
    $_SERVER['HTTPS'] = '1';
if (isset($_SERVER['HTTP_X_FORWARDED_HOST'])) {
    $_SERVER['HTTP_HOST'] = $_SERVER['HTTP_X_FORWARDED_HOST'];
}
define( 'WP_HOME', 'http://xx.xxx.xxx.xxx');
define( 'WP_SITEURL', 'http://xx.xxx.xxx.xxx');
What I would like to happen: when a user enters https://test.xxxxxxx.com in the address bar, my WordPress site loads with the proper theme and all my images, while https://test.xxxxxxx.com still shows in the address bar.
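For what it's worth, mixed-content errors in this kind of setup usually come from WP_HOME/WP_SITEURL pointing at the http:// origin, which makes WordPress render absolute http:// asset URLs into the https page. A sketch under that assumption, reusing the redacted hostnames from the question (the ssl_certificate lines are placeholders):

#Nginx file
server {
    listen 443 ssl;
    server_name test.xxxxxxx.com;
    # ssl_certificate     /path/to/fullchain.pem;
    # ssl_certificate_key /path/to/privkey.pem;

    location / {
        proxy_pass http://xx.xxx.xxx.xxx:80;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}

Then, in wp-config.php, setting WP_HOME and WP_SITEURL to 'https://test.xxxxxxx.com' (the public origin, not the backend IP) makes the generated theme and image URLs https as well.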
I want to suggest using the HAProxy reverse proxy in ECS.
I tried an nginx reverse proxy but failed, and succeeded with HAProxy.
Its configuration is simpler than nginx's.
First, use the "links" option of Docker and set environment variables (e.g. LINK_APP, LINK_PORT).
Second, fill these environment variables into haproxy.cfg.
I also recommend using "dynamic port mapping" with the ALB; it makes the setup more flexible.
taskdef.json :
# taskdef.json
{
  "executionRoleArn": "arn:aws:iam::<AWS_ACCOUNT_ID>:role/<APP_NAME>_ecsTaskExecutionRole",
  "containerDefinitions": [
    {
      "name": "<APP_NAME>-rp",
      "image": "gnokoheat/ecs-reverse-proxy:latest",
      "essential": true,
      "memoryReservation": <MEMORY_RESV>,
      "portMappings": [
        {
          "hostPort": 0,
          "containerPort": 80,
          "protocol": "tcp"
        }
      ],
      "links": [
        "<APP_NAME>"
      ],
      "environment": [
        {
          "name": "LINK_PORT",
          "value": "<SERVICE_PORT>"
        },
        {
          "name": "LINK_APP",
          "value": "<APP_NAME>"
        }
      ]
    },
    {
      "name": "<APP_NAME>",
      "image": "<IMAGE_NAME>",
      "essential": true,
      "memoryReservation": <MEMORY_RESV>,
      "portMappings": [
        {
          "protocol": "tcp",
          "containerPort": <SERVICE_PORT>
        }
      ],
      "environment": [
        {
          "name": "PORT",
          "value": "<SERVICE_PORT>"
        },
        {
          "name": "APP_NAME",
          "value": "<APP_NAME>"
        }
      ]
    }
  ],
  "requiresCompatibilities": [
    "EC2"
  ],
  "networkMode": "bridge",
  "family": "<APP_NAME>"
}
haproxy.cfg :
# haproxy.cfg
global
    daemon
    pidfile /var/run/haproxy.pid

defaults
    log global
    mode http
    retries 3
    timeout connect 5000
    timeout client 50000
    timeout server 50000

frontend http
    bind *:80
    http-request set-header X-Forwarded-Host %[req.hdr(Host)]
    compression algo gzip
    compression type text/css text/javascript text/plain application/json application/xml
    default_backend app

backend app
    server static "${LINK_APP}":"${LINK_PORT}"
Dockerfile(haproxy) :
FROM haproxy:1.7
USER root
COPY haproxy.cfg /usr/local/etc/haproxy/haproxy.cfg
See :
Github : https://github.com/gnokoheat/ecs-reverse-proxy
Docker image : gnokoheat/ecs-reverse-proxy:latest
The backend service/nginx proxy starts responding with 'Error: socket hang up' when the request count increases. The setup is as follows.
OS: CentOS 6
Express JS service -> nginx as a proxy -> flask app run by Gunicorn
The JS app sends multiple requests at the same time to the other service; when the request count exceeds ~100, it starts to return error responses. With a lower count everything works fine.
I have followed the example nginx configuration from the Gunicorn documentation, increased the timeout limits, and raised nginx's open-files limit. I have also tried the keepalive option, but the issue remains. Gunicorn doesn't show any errors.
nginx configuration fragment:
upstream app_server {
    server 127.0.0.1:8000 fail_timeout=0;
    keepalive 100;
}

server {
    listen 5001;
    client_max_body_size 4G;
    keepalive_timeout 300;

    root /path/to/app/current/public; # static files

    location / {
        try_files $uri @proxy_to_app;
    }

    location @proxy_to_app {
        # Timeouts
        proxy_read_timeout 300;
        proxy_connect_timeout 300;
        proxy_send_timeout 300;
        send_timeout 300;

        proxy_http_version 1.1;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header Host $http_host;
        proxy_redirect off;
        proxy_pass http://app_server;
    }
}
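One detail worth checking in this fragment (this comes from nginx's upstream keepalive documentation, not from anything diagnosed in the question): for the keepalive 100 pool to be used at all, the Connection header must be cleared in the proxied location, otherwise nginx forwards the client's Connection header and opens a fresh upstream socket per request:

location @proxy_to_app {
    proxy_http_version 1.1;
    proxy_set_header Connection "";   # required for upstream keepalive to take effect
    # ... remaining proxy_* directives unchanged
    proxy_pass http://app_server;
}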
Error response received from proxy:
{ RequestError: Error: socket hang up
at new RequestError (/home/pm2deploy/apps/app-backend/source/node_modules/request-promise-core/lib/errors.js:14:15)
at Request.plumbing.callback (/home/pm2deploy/apps/app-backend/source/node_modules/request-promise-core/lib/plumbing.js:87:29)
at Request.RP$callback [as _callback] (/home/pm2deploy/apps/app-backend/source/node_modules/request-promise-core/lib/plumbing.js:46:31)
at self.callback (/home/pm2deploy/apps/app-backend/source/node_modules/request/request.js:185:22)
at Request.emit (events.js:160:13)
at Request.onRequestError (/home/pm2deploy/apps/app-backend/source/node_modules/request/request.js:881:8)
at ClientRequest.emit (events.js:160:13)
at Socket.socketOnEnd (_http_client.js:423:9)
at Socket.emit (events.js:165:20)
at endReadableNT (_stream_readable.js:1101:12)
at process._tickCallback (internal/process/next_tick.js:152:19)
name: 'RequestError',
message: 'Error: socket hang up',
cause: { Error: socket hang up
at createHangUpError (_http_client.js:330:15)
at Socket.socketOnEnd (_http_client.js:423:23)
at Socket.emit (events.js:165:20)
at endReadableNT (_stream_readable.js:1101:12)
at process._tickCallback (internal/process/next_tick.js:152:19) code: 'ECONNRESET' },
error: { Error: socket hang up
at createHangUpError (_http_client.js:330:15)
at Socket.socketOnEnd (_http_client.js:423:23)
at Socket.emit (events.js:165:20)
at endReadableNT (_stream_readable.js:1101:12)
at process._tickCallback (internal/process/next_tick.js:152:19) code: 'ECONNRESET' },
options:
{ method: 'PUT',
uri: 'http://localhost:5001/transformers/segmentAvg',
qs:
{ stdMultiplier: 2,
segmentLeft: 1509366682333,
segmentRight: 1509367401685 },
body: { index: [Array], values: [Array] },
headers: {},
json: true,
callback: [Function: RP$callback],
transform: undefined,
simple: true,
resolveWithFullResponse: false,
transform2xxOnly: false },
response: undefined }
ADDED:
In the OS log, the following entry was recorded:
possible SYN flooding on port X. Sending cookies.
The kernel socket backlog reached its limit and subsequent requests were dropped.
Reason: Kernel dropping TCP connections due to LISTEN sockets buffer full in Red Hat Enterprise Linux
Increase kernel socket backlog limit
Check the current value:
# sysctl net.core.somaxconn
net.core.somaxconn = 128
Increase the value:
# sysctl -w net.core.somaxconn=2048
net.core.somaxconn = 2048
Confirm the change by viewing again:
# sysctl net.core.somaxconn
net.core.somaxconn = 2048
Persist the change:
echo "net.core.somaxconn = 2048" >> /etc/sysctl.conf
Increase application socket listen backlog
Configuration parameter of uWSGI:
listen = 1024
This solution was taken from https://access.redhat.com/solutions/30453
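Since the backend in this question is Gunicorn rather than uWSGI, the equivalent knob there is Gunicorn's backlog setting, and nginx has its own accept-queue size on listen. A sketch (the values are examples chosen to align with somaxconn, not tuned recommendations):

# Gunicorn: backlog passed to listen(2)
gunicorn --backlog 2048 --bind 127.0.0.1:8000 app:app

# nginx: raise the accept queue for the listening socket
server {
    listen 5001 backlog=2048;
    # ...
}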