Cookie has been rejected because it is in a cross-site context (Nginx, Next.js, Rocket) - nginx

My development workflow is as follows:
I access the frontend via https://client.my-project:3001. Nginx proxies it to Next.js's local dev server at localhost:3000. In a React component there is a handleSubmit function in which fetch sends a POST request to the backend. I use Nginx as a reverse proxy between Next.js and the backend. The backend's response contains a cookie configured with Secure, HttpOnly, and SameSite=Strict. The cookie is rejected by the browser with the following message: Cookie "id" has been rejected because it is in a cross-site context and its "SameSite" is "Lax" or "Strict".
Here is the conf file for nginx:
upstream api {
    server localhost:8000;
}

upstream client {
    server localhost:3000;
}

server {
    listen 3001 ssl;
    server_name client.my-project;

    ssl_certificate /client/client.my-project.pem;
    ssl_certificate_key /client/client.my-project.pem;

    location / {
        proxy_pass http://client;
    }

    location /_next/webpack-hmr {
        proxy_pass http://client/_next/webpack-hmr;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}

server {
    if ($http_origin = https://client.my-project:3001) {
        set $allowed_origin 'https://client.my-project:3001';
    }

    listen 4545 ssl;
    server_name api.my-project;

    ssl_certificate /api/api.my-project.pem;
    ssl_certificate_key /api/api.my-project-key.pem;

    location / {
        add_header 'Access-Control-Allow-Origin' $allowed_origin always;
        add_header 'Access-Control-Allow-Credentials' 'true' always;
        add_header 'Access-Control-Allow-Headers' 'Authorization,Accept,Origin,DNT,X-CustomHeader,Keep-Alive,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type,Content-Range,Range';
        add_header 'Access-Control-Allow-Methods' 'GET,POST,OPTIONS,PUT,DELETE,PATCH';
        proxy_pass https://api;
    }
}
Here is how I use fetch to send a POST request to the backend.
const response = await fetch(`${process.env.NEXT_PUBLIC_API}/user`, {
    mode: 'cors',
    method: 'POST',
    body: formData,
    credentials: 'include'
});
And here is how the cookie is created in the backend:
let cookie = Cookie::build("id", session_id)
    .secure(true)
    .http_only(true)
    .same_site(SameSite::Strict)
    .finish();
I want to avoid setting the cookie to SameSite=None.
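One way to keep SameSite=Strict is to take the request out of a cross-site context altogether: serve the frontend and the API from a single origin and route by path, so the cookie is first-party. A minimal sketch, reusing the ports above (the /api/ prefix is an assumption, not part of the original setup):

```nginx
# Sketch: one origin for both apps, so the "id" cookie is never cross-site.
server {
    listen 3001 ssl;
    server_name client.my-project;

    ssl_certificate     /client/client.my-project.pem;
    ssl_certificate_key /client/client.my-project.pem;

    # API first: /api/user is proxied to the backend as /user.
    location /api/ {
        proxy_pass http://localhost:8000/;
    }

    # Everything else goes to the Next.js dev server.
    location / {
        proxy_pass http://localhost:3000;
    }
}
```

The fetch call would then target a relative URL such as `/api/user`, and no CORS headers are needed at all.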

Related

Nginx TLS Passthrough (111: Connection refused) while connecting to upstream

What I am trying to do is configure NGINX to forward HTTPS requests to the corresponding containers (by hostname) running on the same machine with TLS passthrough, so TLS termination is done at the containers. Right now, I only have bw.domain.com.
Here is the nginx config I am trying:
stream {
    map $ssl_preread_server_name $name {
        bw.domain.com bw;
    }

    upstream bw {
        server 127.0.0.1:4443;
    }

    server {
        listen 443;
        proxy_pass $name;
        ssl_preread on;
    }
}
Here is the nginx config generated by the self-hosted Bitwarden install (the upstream):
#######################################################################
# WARNING: This file is generated. Do not make changes to this file. #
# They will be overwritten on update. You can manage various settings #
# used in this file from the ./bwdata/config.yml file for your #
# installation. #
#######################################################################
server {
    listen 8080 default_server;
    listen [::]:8080 default_server;
    server_name bw.domain.com;
    return 301 https://bw.domain.com$request_uri;
}

server {
    listen 8443 ssl http2;
    listen [::]:8443 ssl http2;
    server_name bw.domain.com;

    ssl_certificate /etc/ssl/fullchain.pem;
    ssl_certificate_key /etc/ssl/privkey.pem;
    ssl_session_timeout 30m;
    ssl_session_cache shared:SSL:20m;
    ssl_session_tickets off;

    # Diffie-Hellman parameter for DHE ciphersuites, recommended 2048 bits
    ssl_dhparam /etc/ssl/dhparam.pem;
    ssl_protocols TLSv1.2;
    ssl_ciphers "ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256";
    # Enables server-side protection from BEAST attacks
    ssl_prefer_server_ciphers on;

    # OCSP Stapling ---
    # Fetch OCSP records from URL in ssl_certificate and cache them
    ssl_stapling on;
    ssl_stapling_verify on;
    # Verify chain of trust of OCSP response using Root CA and Intermediate certs
    ssl_trusted_certificate /etc/ssl/fullchain.pem;
    resolver 1.1.1.1 1.0.0.1 9.9.9.9 149.112.112.112 valid=300s;

    include /etc/nginx/security-headers-ssl.conf;
    include /etc/nginx/security-headers.conf;

    location / {
        proxy_pass http://web:5000/;
        include /etc/nginx/security-headers-ssl.conf;
        include /etc/nginx/security-headers.conf;
        add_header Content-Security-Policy "default-src 'self'; style-src 'self' 'unsafe-inline'; img-src 'self' data: https://haveibeenpwned.com https://www.gravatar.com; child-src 'self' https://*.duosecurity.com https://*.duofederal.com; frame-src 'self' https://*.duosecurity.com https://*.duofederal.com; connect-src 'self' wss://bw.domain.com https://api.pwnedpasswords.com https://2fa.directory; object-src 'self' blob:;";
        add_header X-Frame-Options SAMEORIGIN;
        add_header X-Robots-Tag "noindex, nofollow";
    }

    location /alive {
        return 200 'alive';
        add_header Content-Type text/plain;
    }

    location = /app-id.json {
        proxy_pass http://web:5000/app-id.json;
        include /etc/nginx/security-headers-ssl.conf;
        include /etc/nginx/security-headers.conf;
        proxy_hide_header Content-Type;
        add_header Content-Type $fido_content_type;
    }

    location = /duo-connector.html {
        proxy_pass http://web:5000/duo-connector.html;
    }

    location = /webauthn-connector.html {
        proxy_pass http://web:5000/webauthn-connector.html;
    }

    location = /webauthn-fallback-connector.html {
        proxy_pass http://web:5000/webauthn-fallback-connector.html;
    }

    location = /sso-connector.html {
        proxy_pass http://web:5000/sso-connector.html;
    }

    location /attachments/ {
        proxy_pass http://attachments:5000/;
    }

    location /api/ {
        proxy_pass http://api:5000/;
    }

    location /icons/ {
        proxy_pass http://icons:5000/;
    }

    location /notifications/ {
        proxy_pass http://notifications:5000/;
    }

    location /notifications/hub {
        proxy_pass http://notifications:5000/hub;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $http_connection;
    }

    location /events/ {
        proxy_pass http://events:5000/;
    }

    location /sso {
        proxy_pass http://sso:5000;
        include /etc/nginx/security-headers-ssl.conf;
        include /etc/nginx/security-headers.conf;
        add_header X-Frame-Options SAMEORIGIN;
    }

    location /identity {
        proxy_pass http://identity:5000;
        include /etc/nginx/security-headers-ssl.conf;
        include /etc/nginx/security-headers.conf;
        add_header X-Frame-Options SAMEORIGIN;
    }

    location /admin {
        proxy_pass http://admin:5000;
        include /etc/nginx/security-headers-ssl.conf;
        include /etc/nginx/security-headers.conf;
        add_header X-Frame-Options SAMEORIGIN;
    }
}
Right now, it throws this error when I use Firefox:
Secure Connection Failed
An error occurred during a connection to bw.domain.com.
PR_END_OF_FILE_ERROR
Error code: PR_END_OF_FILE_ERROR
Here are the NGINX logs:
2023/01/07 23:09:37 [error] 28#28: *1 connect() failed (111:
Connection refused) while connecting to upstream, client: 172.16.1.1,
server: 0.0.0.0:443, upstream: "127.0.0.1:4443", bytes from/to
client:0/0, bytes from/to upstream:0/0
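The log shows nginx being refused on 127.0.0.1:4443, while the generated Bitwarden config above listens on 8443. Assuming the container publishes 8443 on the host loopback (an assumption; verify the actual port mapping), the stream block would become:

```nginx
# Sketch: point the passthrough upstream at the port Bitwarden listens on.
stream {
    map $ssl_preread_server_name $name {
        bw.domain.com bw;
    }

    upstream bw {
        server 127.0.0.1:8443;  # must match the published container port
    }

    server {
        listen 443;
        ssl_preread on;
        proxy_pass $name;
    }
}
```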

What is wrong with my FastAPI Reverse Proxy?

I have a web application which is set up in the following way: the frontend is served by nginx, and the backend is handled by FastAPI. I set up a rule in nginx to proxy all requests with the URL prefix /api directly to the backend. I also proxy to Grafana. The way I did this was to build a reverse proxy within my FastAPI server: nginx has a rule to proxy all requests with the prefix /grafana to FastAPI, and FastAPI then performs user authentication before proxying to the Grafana server. The exception is that any Grafana WebSocket connection is proxied directly from nginx to the Grafana server.
Here's my nginx conf file
server {
    server_name example.com www.example.com;
    root /var/www/web-app/html;
    index index.html index.htm index.nginx-debian.html;

    location / {
        try_files $uri $uri/ =404;
    }

    location /api {
        proxy_pass http://localhost:5000;
    }

    # grafana reverse proxy
    location /grafana {
        proxy_set_header Origin http://localhost:3000;
        # proxy_set_header Origin https://example.com;
        proxy_hide_header Access-Control-Allow-Origin;
        add_header Access-Control-Allow-Origin $http_origin;
        proxy_pass http://localhost:5000;
    }

    location /grafana/api/live {
        rewrite ^/(.*) /$1 break;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $connection_upgrade;
        proxy_set_header Host $http_host;
        proxy_pass http://localhost:3030;
    }

    listen 443 ssl http2; # managed by Certbot
    ssl_certificate /etc/letsencrypt/example.com/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/example.com/privkey.pem; # managed by Certbot
    include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot
}

server {
    if ($host = example.com) {
        return 301 https://$host$request_uri;
    } # managed by Certbot

    listen 80;
    server_name example.com www.example.com;
    return 404; # managed by Certbot
}
Here's my FastAPI proxy
async def _reverse_proxy(request: Request):
    url = httpx.URL(path=request.url.path,
                    query=request.url.query.encode("utf-8"))
    new_header = MutableHeaders(request.headers)
    if "grafana_session" not in request.cookies:
        if "authorization" not in request.headers:
            return RedirectResponse("/")
        jwt_token = request.headers["authorization"].replace('Bearer ', '')
        # user authentication stuff...
        current_user = get_current_user_from_token(token=jwt_token, db=some_session)
        org = get_org_by_id(current_user.org_id, some_session)
        if org is None:
            raise HTTPException(
                status_code=404,
                detail="Organization not found",
            )
        del new_header['authorization']
        new_header['X-WEBAUTH-USER'] = current_user.username
        new_header['X-Grafana-Org-Id'] = f"{org.grafana_org_id}"
    if "authorization" in new_header:
        del new_header['authorization']
    rp_req = client.build_request(request.method, url,
                                  headers=new_header.raw,
                                  content=await request.body())
    rp_resp = await client.send(rp_req, stream=True)
    return StreamingResponse(
        rp_resp.aiter_raw(),
        status_code=rp_resp.status_code,
        headers=rp_resp.headers,
        background=BackgroundTask(rp_resp.aclose),
    )

app = FastAPI(title=settings.PROJECT_NAME, version=settings.PROJECT_VERSION)

origins = ["http://localhost:3000", "https://example.com"]
app.add_middleware(
    CORSMiddleware,
    allow_origins=origins,
    allow_credentials=True,
    allow_methods=["*"],
    allow_headers=["*"],
)
app.include_router(api_router, prefix=settings.API_V1_STR)
app.add_route("/grafana/{path:path}", _reverse_proxy, ["GET", "POST", "DELETE"])
If I run this locally on localhost:3000 (with some modifications so nginx listens on 3000 instead of 443 ssl), everything works perfectly. If I try in production at example.com, I have to include the hacky bit proxy_set_header Origin http://localhost:3000; to rewrite the Origin of requests to my FastAPI proxy to make it work. Why is this happening?
NOTE: I've checked the Grafana server logs to make sure it wasn't the issue. None of the requests make it past the FastAPI server; it is the one returning the 403 "origin not allowed" error.
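For what it's worth, "origin not allowed" is also the message Grafana itself returns when a request's Origin does not match what it expects, so it may be worth ruling out Grafana's own origin check before patching nginx. Rather than rewriting Origin at the proxy, Grafana can be told its public URL; a grafana.ini sketch (the values are assumptions for a deployment under https://example.com/grafana):

```ini
; Sketch: make Grafana's origin check accept requests for example.com/grafana
[server]
domain = example.com
root_url = %(protocol)s://%(domain)s/grafana/
serve_from_sub_path = true
```

With the public URL configured, the proxy can forward the browser's real Origin and Host unchanged instead of faking localhost:3000.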

How to trigger an nginx-rtmp pull from the ingest server when a client requests a .m3u8 video through the load balancer

I have an ingest nginx-rtmp server which acts as the single source of RTMP streams for the worker nodes.
The ingest instance gets the RTMP stream through OBS. When a client requests a video through the load balancer, e.g. https://load-balancer.com/hls/test.m3u8,
the load balancer sends this request to one of the worker nodes.
The worker nodes have the following nginx config file:
worker_processes auto;

events {
    worker_connections 1024;
}

# RTMP configuration
rtmp {
    server {
        listen 1935; # Listen on standard RTMP port
        chunk_size 4000;

        # Define the Application
        application show {
            live on;
            pull rtmp://localhost:1935/stream/ live=1;

            # Turn on HLS
            hls on;
            hls_path /mnt/hls/;
            hls_fragment 3;
            hls_playlist_length 60;

            # disable consuming the stream from nginx as rtmp
            deny play all;
        }
    }
}

http {
    sendfile off;
    tcp_nopush on;
    aio on;
    directio 512;
    default_type application/octet-stream;

    server {
        listen 8080;

        location / {
            # Disable cache
            add_header 'Cache-Control' 'no-cache';

            # CORS setup
            add_header 'Access-Control-Allow-Origin' '*' always;
            add_header 'Access-Control-Expose-Headers' 'Content-Length';

            # allow CORS preflight requests
            if ($request_method = 'OPTIONS') {
                add_header 'Access-Control-Allow-Origin' '*';
                add_header 'Access-Control-Max-Age' 1728000;
                add_header 'Content-Type' 'text/plain charset=UTF-8';
                add_header 'Content-Length' 0;
                return 204;
            }

            types {
                application/dash+xml mpd;
                application/vnd.apple.mpegurl m3u8;
                video/mp2t ts;
            }

            root /mnt/;
        }
    }
}
The nginx-rtmp pull on a worker node does not start until someone plays the stream over RTMP, so the HLS files are never generated for the HTTP request.
Note: I cannot push the stream to the worker nodes, because the workers are auto-scaled and do not have a fixed count.
I want a solution where, whenever a client requests a video and a worker node receives the HTTPS request, the RTMP application show somehow gets triggered to pull the stream from the ingest server and generate the HLS segments, so the HTTP request can be fulfilled.
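If an always-on relay is acceptable, the nginx-rtmp `pull` directive takes a `static` flag that starts the relay when nginx starts instead of waiting for the first RTMP player; static pulls require an explicit stream name. A sketch (the ingest hostname and the stream name `test` are assumptions):

```nginx
# Sketch: a static pull starts at nginx startup, so the HLS files exist
# before any HTTP request arrives. Trade-off: the relay runs with no viewers.
application show {
    live on;
    pull rtmp://ingest.example.com:1935/stream name=test static live=1;

    hls on;
    hls_path /mnt/hls/;
    hls_fragment 3;
    hls_playlist_length 60;

    deny play all;
}
```

For truly on-demand pulling, an `on_play` style control hook would be needed instead, but that still requires an RTMP play event rather than an HTTP request to fire.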

Problem connecting web3 via websocket to node with nginx

We are trying to connect to a geth node via WebSockets to be able to subscribe to contract events. The node is created with Docker, and the container uses nginx as a proxy.
We can connect easily over HTTP (non-WS RPC), but we cannot subscribe to contract events over HTTP.
We have managed to establish the connection in a local instance of this Docker image with a Node.js WebSocket server behind this same nginx proxy, but we cannot connect with web3 v1.0.0-beta55.
The two different errors we get are 403 (Forbidden) with this nginx config (the one that works with non-web3 WebSockets):
location /rpcws {
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    proxy_pass http://localhost:22001;
}
or error 400 (Bad Request) with this other configuration:
location /rpcws {
    proxy_pass http://localhost:22001;
}
On the client side we get either
Error during WebSocket handshake: Unexpected response code: 400||403
or
connection not open on send()
Right now we are tracking down possible port issues in a local instance of this Docker image, but we have reconfigured the node several times to receive the WS connection through the already-working HTTP RPC port (changing all the geth and nginx config to serve WS RPC instead of HTTP RPC), and we get the same error codes.
Our main guess is that nginx is not proxying the WebSocket request correctly. We asked the Quorum network technical team, but they have never tried to establish a WebSocket connection and don't know how to help us any further.
All the code is listed below.
Solidity smart contract:
pragma solidity 0.4.18;

contract EventTest {
    string fName;
    uint age;

    event doSetInstructor();
    event instructorSetted(string name, uint age);

    function askForSetInstructor() public {
        doSetInstructor();
    }

    function setInstructor(string _fName, uint _age) public {
        fName = _fName;
        age = _age;
        instructorSetted(fName, age);
    }

    function getInstructor() public constant returns (string, uint) {
        return (fName, age);
    }
}
Web3 connection:
var Web3 = require('web3');
var TruffleContract = require('truffle-contract');
var eventTestABI = require('./abi/EventTest.json');
var io = require('socket.io-client');

var web3 = new Web3(new Web3.providers.WebsocketProvider('ws://9.43.80.817/rpcws'));
var contractAddress;
web3.eth.defaultAccount = '0x41E4e56603bF37a03Bb5Asa635787b3068052b82';

let truffleContract = TruffleContract(eventTestABI);
contractAddress = '0x82ce1df01f2a8bcadfad485eaa785424123734f7';
let contract = new web3.eth.Contract(eventTestABI.abi, contractAddress, {
    from: '0x41E4e56603bF37a03Bb5Asa635787b3068052b82',
    gas: 20000000,
    gasPrice: 0,
    data: truffleContract.deployedBytecode
});

web3.eth.subscribe('logs', {
    address: contract.options.address,
    topics: [contract.events.doSetInstructor().signature]
}, (error, result) => {
    if (!error) {
        console.log("Event triggered");
        const eventObj = web3.eth.abi.decodeLog(
            eventJsonInterface.inputs,
            result.data,
            result.topics.slice(1)
        );
        console.log("New event!", eventObj);
        console.log(eventObj);
    } else {
        console.log("Error watching event", error);
    }
});
The geth setup:
--networkid $NETID --identity $IDENTITY --permissioned \
--ws --wsaddr 0.0.0.0 --wsport 22001 \
--wsapi admin,db,eth,debug,miner,net,shh,txpool,personal,web3,quorum,istanbul \
--wsorigins '*' \
--rpc --rpcaddr $RPCADDR \
--rpcapi admin,db,eth,debug,miner,net,shh,txpool,personal,web3,quorum,istanbul \
--rpccorsdomain '*' --rpcport 22000 --port 21000 \
--istanbul.requesttimeout 10000 --ethstats $IDENTITY --verbosity 3 \
--vmdebug --emitcheckpoints --targetgaslimit 18446744073709551615 \
--syncmode full --gcmode $GCMODE \
--vmodule consensus/istanbul/core/core.go=5 --nodiscover
The nginx conf file:
limit_req_zone $binary_remote_addr zone=one:10m rate=999999999999999999r/s;
limit_conn_zone $binary_remote_addr zone=addr:10m;
client_body_buffer_size 128k;

server {
    listen 80 default_server;
    listen [::]:80 default_server;
    listen 443 ssl default_server;
    listen [::]:443 ssl default_server;

    root /var/www/html;
    access_log /var/log/nginx/access_log combined;
    error_log /var/log/nginx/error.log warn;
    index index.html index.htm index.nginx-debian.html;

    #ssl_certificate /etc/ssl/nginx/alastria-test.crt;
    #ssl_certificate_key /etc/ssl/nginx/alastria-test.key;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_ciphers HIGH:!aNULL:!MD5;

    client_body_timeout 30s;
    client_header_timeout 30s;

    add_header 'Access-Control-Allow-Headers' 'Content-Type';
    add_header 'Access-Control-Allow-Origin' "http://someurl.com";

    location / {
        # First attempt to serve request as file, then as directory,
        # then fall back to displaying a 404.
        try_files $uri $uri/ =404;
    }

    location /rpcws {
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_pass http://localhost:22001;
    }

    location /rpc {
        # Request rate and number of connections limitation
        limit_req zone=one burst=30 nodelay;
        limit_conn addr 10;

        # Whitelist/Blacklist
        include ./conf.d/blacklist;

        content_by_lua_block {
            ngx.req.read_body()
            local data = ngx.req.get_body_data()
            if data then
                if not (string.match(data, "eth_") or string.match(data, "net_") or string.match(data, "web3_") or string.match(data, "personal_")) then
                    ngx.exit(403)
                else
                    ngx.exec("@rpc_proxy")
                end
            end
        }
    }

    location @rpc_proxy {
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $connection_upgrade;
        proxy_pass http://localhost:22000;
    }
}
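One thing to check, assuming it is not defined elsewhere (only part of the config is shown): `$connection_upgrade`, used in the rpc_proxy location, is not a built-in nginx variable. It is conventionally defined with a `map` in the `http` context; if the map is missing, the upstream `Connection` header is empty and the WebSocket upgrade never completes:

```nginx
# Sketch: the conventional map behind $connection_upgrade.
map $http_upgrade $connection_upgrade {
    default upgrade;
    ''      close;
}
```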

Nginx + HapiJS CORS request (not GET)

I have a Node server with a HapiJS API, and I am attempting to send it requests from a separate Node server.
Everything works fine in development, but not on the servers, where basic auth is enabled.
Specifically, GET requests work fine, but all other requests fail. I suspect this is because the OPTIONS preflight check fails; from my limited understanding, no preflight is sent for GET requests.
The exact error message I get is:
No 'Access-Control-Allow-Origin' header is present on the requested resource. Origin 'https://site.example.com' is therefore not allowed access. The response had HTTP status code 401.
My nginx config:
server {
    listen 80;
    server_name https://api.example.com;
    return 301 https://$server_name$request_uri; ## Permanently redirect all http traffic to https
}

server {
    #SSL INIT
    listen 443;
    ssl on;
    ssl_certificate /var/vcap/jobs/nginx/config/site.bundle.crt;
    ssl_certificate_key /var/vcap/jobs/nginx/config/site.key;
    server_name https://api.example.com;

    location / {
        auth_basic "Restricted Content";
        auth_basic_user_file .htpasswd;

        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_set_header X-NginX-Proxy true;
        proxy_pass_request_headers on;
        proxy_pass https://api.example.com;
    }
}
And my CORS settings for HapiJS (I excluded the irrelevant stuff).
connections:
  routes:
    cors:
      origin: ["https://api.example.com"]
      credentials: true
I have tried the hapi-cors-headers plugin (https://www.npmjs.com/package/hapi-cors-headers) instead of the above settings, to no avail. I also tried enabling every CORS option in nginx I could think of (most of them taken from http://enable-cors.org/server_nginx.html).
One of two things happens, no matter how I've adjusted the configuration - and I've tried a LOT of things:
1) It continues to give the above message no matter what
2) It complains about the header being there TWICE (if I put it in both nginx and HapiJS at the same time)
In no situation does it work (except for GET requests).
An example of a POST ajax call to the API that I'm using (used with Kendo):
$.ajax({
    url: api_address + '/vendors',
    method: 'POST',
    contentType: 'application/json',
    processData: false,
    data: JSON.stringify(this.vendor),
    xhrFields: {
        withCredentials: true
    },
    success: function(data) {
        vm.set('vendor', data);
        notification.show('Vendor created successfully', 'success');
        vm.set('has_changes', false);
    },
    error: function() {
        notification.show('Error creating vendor', 'error');
    }
});
the api_address mentioned above is:
https://username:password@api.example.com
Why is this working for GET requests but not POST/PUT/etc?
Change your nginx to handle the CORS and don't have hapi worry about it. Taken and modified from http://enable-cors.org/server_nginx.html
location / {
    add_header 'Access-Control-Allow-Origin' '*';
    add_header 'Access-Control-Allow-Methods' 'GET, POST, OPTIONS';
    add_header 'Access-Control-Allow-Headers' 'DNT,X-CustomHeader,Keep-Alive,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type';

    if ($request_method = 'OPTIONS') {
        # add_header directives inside an "if" block replace the inherited
        # ones, so the CORS headers must be repeated for the preflight reply.
        add_header 'Access-Control-Allow-Origin' '*';
        add_header 'Access-Control-Allow-Methods' 'GET, POST, OPTIONS';
        add_header 'Access-Control-Allow-Headers' 'DNT,X-CustomHeader,Keep-Alive,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type';
        #
        # Tell client that this pre-flight info is valid for 20 days
        #
        add_header 'Access-Control-Max-Age' 1728000;
        add_header 'Content-Type' 'text/plain charset=UTF-8';
        add_header 'Content-Length' 0;
        return 204;
    }

    auth_basic "Restricted Content";
    auth_basic_user_file .htpasswd;

    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header Host $http_host;
    proxy_set_header X-NginX-Proxy true;
    proxy_pass_request_headers on;
    proxy_pass https://api.example.com;
}
Of course you would need more work if you are supporting PUT or DELETE.
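Concretely, supporting PUT or DELETE would at least mean widening the advertised method list wherever the header is added; a sketch:

```nginx
# Sketch: extend the preflight response to cover PUT/DELETE as well.
add_header 'Access-Control-Allow-Methods' 'GET, POST, PUT, DELETE, OPTIONS';
```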
