CORS issue with Server-Sent Events with fastify-sse

I'm using a Fastify server to send SSE events to a React front-end.
While everything worked well locally, I'm having issues once deployed behind Nginx. The front-end and the server aren't on the same domain, and although I set the CORS origin to "*" on the server and every other call resolves without issue, for the server-sent-events endpoint only I get:
Cross-Origin Request Blocked: The Same Origin Policy disallows reading the remote resource at https://example.com/events. (Reason: CORS request did not succeed). Status code: (null).
Here's how Fastify is configured, using @fastify/cors and fastify-sse-v2:
import fastifyCors from "@fastify/cors";
import FastifySSEPlugin from "fastify-sse-v2";
// ...
await this.instance.register(FastifySSEPlugin, { logLevel: "debug" });
await this.instance.after();
await this.instance.register(fastifyCors, { origin: "*" });
Then I send events based on Postgres pub/sub with:
await pubsub.addChannel(TxUpdateChannel);
reply.sse(
  (async function* () {
    for await (const [event] of on(pubsub, TxUpdateChannel)) {
      yield {
        event: event.name,
        data: JSON.stringify(event.data),
      };
    }
  })()
);
On the front-end I use eventsource so that I can add Authorization headers:
import EventSource from "eventsource";
// ...
const source = new EventSource(`${SERVER_URL}/transaction/events`, {
  headers: {
    'Authorization': `Bearer ${jwt}`,
  },
});
source.onmessage = (event) => {
  console.log('got message', event);
  getUserData();
};
source.onopen = (event) => {
  console.log('---> open', event);
};
source.onerror = (event) => {
  console.error('Event error', event);
};

The problem was in the nginx configuration. Thanks to EventSource / Server-Sent Events through Nginx, I solved it by placing the following in the location block of the nginx conf:
proxy_set_header Connection '';
proxy_http_version 1.1;
chunked_transfer_encoding off;
proxy_buffering off;
proxy_cache off;
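For context, a minimal sketch of how those directives could sit in a complete location block; the upstream address, path, and timeout are assumptions, not taken from the original setup:
location /transaction/events {
    proxy_pass http://localhost:3000;  # assumed upstream address
    proxy_set_header Connection '';
    proxy_http_version 1.1;
    chunked_transfer_encoding off;
    proxy_buffering off;
    proxy_cache off;
    proxy_read_timeout 24h;  # keep long-lived SSE connections from timing out
}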

Related

Vue.js-Electron application with an Apollo client does not receive cookies from a remote API server

I'm trying to build an application with a GraphQL backend (based on Node.js and graphql-yoga). The server is hosted on a Linux machine with nginx as a reverse proxy; the proxy configuration is listed below.
server {
    listen 443 ssl;
    listen [::]:443 ssl;
    include snippets/self-signed.conf;
    include snippets/ssl-params.conf;
    server_name url.example.com;
    charset utf-8;
    # Always serve index.html for any request
    location / {
        root /srv/app/public;
        try_files $uri /index.html;
    }
    location /graphql {
        allow all;
        proxy_pass http://localhost:3000$request_uri;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }
    location /playground {
        # Additional configuration
    }
    location /subscriptions {
        # Additional configuration
    }
    error_log /var/log/nginx/vue-app-error.log;
    access_log /var/log/nginx/vue-app-access.log;
}
server {
    listen 80;
    listen [::]:80;
    server_name url.example.com;
    return 302 https://$server_name$request_uri;
}
To avoid CORS problems, I had to add the following entry to my server's configuration, and it works.
const options = {
  // ...
  /**
   * Cross-Origin Resource Sharing (CORS)
   */
  cors: {
    credentials: true,
    origin: ['http://localhost:8080', 'https://url.example.com'], // frontend URLs
  },
};
The authentication is based on JWT tokens packed into http-only cookies for the access and refresh tokens. This works as expected when I access my server with a browser, and the cookies are shown in the developer tools. Within the AuthData, I additionally provide the username and a JWT token as a return value.
{
  "data": {
    "login": {
      "user": "admin",
      "name": " Admin",
      "token": "eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJ1c2VySWQiOiJhZG1pbiIsIm5hbWUiOiIgQWRtaW5pc3RyYXRvciIsImVtYWlsIjoiYWRtaW5AZXhhbXBsZS5jb20iLCJpYXQiOjE2NDg0ODM2MDcsImV4cCI6MTY0ODQ4NzIwN30.Dol41LStkscXrlGn3GJotf83k_d2EImyvDU68Dg8Bvw"
    }
  },
  "loading": false,
  "networkStatus": 7
}
Now I wanted to play around with Electron, because it would make some use cases of the application much easier. The stack is Vue.js and Electron with Tailwind and the Apollo client (I use the Apollo UploadClient here), as listed below.
import {
  ApolloClient,
  ApolloLink,
  DefaultOptions,
  InMemoryCache,
} from '@apollo/client/core';
import { setContext } from '@apollo/client/link/context';
import { createUploadLink } from 'apollo-upload-client';

const backendServer = 'https://url.example.com/graphql';

const httpLink = createUploadLink({
  uri: backendServer,
  credentials: 'include',
});

// Cache implementation
const cache = new InMemoryCache({
  addTypename: false,
});

const defaultOptions = {
  watchQuery: {
    fetchPolicy: 'no-cache',
  },
  query: {
    fetchPolicy: 'no-cache',
    errorPolicy: 'all',
  },
  mutate: {
    errorPolicy: 'all',
  },
} as DefaultOptions;

const authLink = setContext((_, { headers }) => ({
  headers: {
    ...headers,
    /**
     * use a custom header tag to classify the application
     * for authentication handling in the backend
     */
    'app-type': 'web',
  },
}));

const apolloClient = new ApolloClient({
  link: ApolloLink.from([authLink, httpLink]),
  cache,
  defaultOptions,
  resolvers: {},
});

export default apolloClient;
This client is bound to my Vue.js instance via @vue/apollo-composable. So I think it is only available in the renderer process. React examples I found on the Internet were built the same way.
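For reference, a minimal sketch of that binding in the renderer's entry point (the file layout and component names are assumptions):
import { createApp, provide, h } from 'vue';
import { DefaultApolloClient } from '@vue/apollo-composable';
import App from './App.vue';
import apolloClient from './apolloClient';

const app = createApp({
  setup() {
    // makes the client available to useQuery/useMutation in components
    provide(DefaultApolloClient, apolloClient);
  },
  render: () => h(App),
});
app.mount('#app');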
Now the problem: when I run the server and the client (Electron) application on my development machine, everything works as expected, and I can see the cookies in the Application tab of the developer tools. I can also access the API without issues.
When I bind my ApolloClient to the remote API server, however, I do not receive any cookies. From the logs I can see that I receive the AuthData as above from the server, but not the cookies, so any further request results in the custom error message "unauthenticated access" provided by my API server.
{
  "loading": false,
  "networkStatus": 8,
  "error": {
    "graphQLErrors": [
      {
        "message": "user is not authenticated...redirect to the login page",
        "locations": [
          { "line": 2, "column": 3 }
        ],
        "path": [
          "users"
        ]
      }
    ],
    "clientErrors": [],
    "networkError": null,
    "message": "user is not authenticated...redirect to the login page"
  },
  "errors": [
    {
      "message": "user is not authenticated...redirect to the login page",
      "locations": [
        { "line": 2, "column": 3 }
      ],
      "path": [
        "users"
      ]
    }
  ]
}
What am I missing or doing wrong, or what else could I test?
Additional test: direct access
I additionally tested accessing the GraphQL API directly from the Electron client, without the nginx proxy, but still had no success. The login works correctly, but I'm not able to see the cookies provided by the server.
Any suggestions?
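One avenue worth checking, given that a plain browser works but Electron does not: Chromium (which Electron embeds) only stores cookies from cross-site responses when they are marked SameSite=None and Secure. A minimal sketch of setting the access token that way on the backend; the Express-style res.cookie API, the cookie name, and the maxAge are assumptions about the server code:
// assumed Express-style response object; adapt to how graphql-yoga exposes it
res.cookie('access_token', accessToken, {
  httpOnly: true,   // not readable from client-side JS
  secure: true,     // required whenever sameSite is 'none'
  sameSite: 'none', // allow the cookie on cross-site requests
  maxAge: 60 * 60 * 1000, // assumed: 1 hour
});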

Why is Nginx truncating the gRPC streaming response?

I've asked this question before but decided to delete that old question and reformulate it along with a minimum reproducible example. The issue is that when I deploy my gunicorn webserver on nginx, my streamed responses from my go server via gRPC get truncated. All details can be found in the repository. My nginx configuration for this site looks like this:
server {
    listen 80 default_server;
    server_name example.com;
    location / {
        #include proxy_params;
        proxy_pass http://localhost:5000;
        proxy_buffering off;
        chunked_transfer_encoding off;
    }
}
The code receiving and parsing the response on the front end looks like this:
<script>
(async function () {
  const response = await fetch("{{ url_for('realtimedata') }}");
  const reader = response.body.pipeThrough(new TextDecoderStream()).getReader();
  while (true) {
    const { done, value } = await reader.read();
    if (done) break;
    try {
      console.log('Received', value);
      const rtd = JSON.parse(value);
      console.log('Parsed', rtd);
    } catch (err) {
      console.log(err);
    }
  }
})()
</script>
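Worth noting about this loop: reader.read() returns whatever chunk boundaries the network happens to deliver, so a single JSON message can arrive split across two reads (or several messages merged into one), and JSON.parse then fails in a way that looks like truncation. A sketch of buffering and re-framing, under the assumption that the server emits newline-delimited JSON:
let buffer = '';
while (true) {
  const { done, value } = await reader.read();
  if (done) break;
  buffer += value;
  const lines = buffer.split('\n');
  buffer = lines.pop(); // keep the trailing partial line for the next read
  for (const line of lines) {
    if (line.trim()) console.log('Parsed', JSON.parse(line));
  }
}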
Something to note regarding the data from the go server: one service provides a data object with 96 fields and another service provides data with 200 fields, which makes the incoming stream responses variable in length (in terms of bytes).
I want to use gunicorn because I may have multiple listeners at the same time. Using gunicorn solved an issue where all the responses were making it to the webserver but were being distributed among the active clients, so each client would get a different response but not all of them.
EDIT:
I've tried changing the response object size on the go server to be the same for both services, but the truncation still happened, so variable length doesn't seem to be the issue. I've also tried this with uWSGI instead of gunicorn and the issue persists; I even set uwsgi_buffering off; with no change.
UPDATE:
I've run the minimum reproducible example with Apache2 instead of Nginx and I get the same issue. Maybe the problem is with something else.
Looking at your Python code, it seems like pushing the data from the backend to the frontend would be better done with websockets. I've rewritten your backend to use FastAPI instead of Flask and modified the nginx configuration.
main.py
import asyncio
import dependencies.rpc_pb2 as r
import dependencies.rpc_pb2_grpc as rpc
from fastapi import FastAPI, WebSocket, Request
from fastapi.templating import Jinja2Templates
import grpc
import json
import os

os.environ["GRPC_SSL_CIPHER_SUITES"] = 'HIGH+ECDSA'

app = FastAPI()
templates = Jinja2Templates(directory="templates")

server_addr = "localhost"
server_port = 3567

@app.get("/")
def read_root(request: Request):
    return templates.TemplateResponse("index.html", {"request": request})

def parseRtd(rtd):
    rtdDict = {}
    rtdDict["source"] = rtd.source
    rtdDict["is_scanning"] = rtd.is_scanning
    rtdDict["timestamp"] = int(rtd.timestamp)
    rtdDict["data"] = {}
    for key, v in rtd.data.items():
        rtdDict["data"][int(key)] = {"name": v.name, "value": v.value}
    return rtdDict

def get_rtd():
    channel = grpc.insecure_channel(f"{server_addr}:{server_port}")
    stub = rpc.RpcServiceStub(channel)
    for rtd in stub.SubscribeDataStream(r.SubscribeDataRequest()):
        yield parseRtd(rtd)

@app.websocket("/ws")
async def websocket_endpoint(websocket: WebSocket):
    await websocket.accept()
    await websocket.send_json({"test": "this is a test"})
    it = get_rtd()
    while True:
        await asyncio.sleep(0.1)
        payload = next(it)
        await websocket.send_json(payload)
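Assuming the file is saved as main.py, it can be served on the port the nginx config below proxies to (standard uvicorn flags):
uvicorn main:app --host 0.0.0.0 --port 5000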
index.html
<html>
<head>
  <script src="https://cdnjs.cloudflare.com/ajax/libs/socket.io/4.4.0/socket.io.js" integrity="sha512-nYuHvSAhY5lFZ4ixSViOwsEKFvlxHMU2NHts1ILuJgOS6ptUmAGt/0i5czIgMOahKZ6JN84YFDA+mCdky7dD8A==" crossorigin="anonymous" referrerpolicy="no-referrer"></script>
</head>
<body>
  <script>
    var ws = new WebSocket("ws://localhost:5000/ws");
    ws.onopen = function () {
      console.log("websocket was open");
    };
    ws.onclose = () => {
      console.log("Websocket was closed!");
    };
    ws.onerror = (error) => {
      console.error("Websocket error: " + JSON.stringify(error));
    };
    ws.onmessage = (message) => {
      console.log("MSG: " + message.data);
    };
  </script>
</body>
</html>
webserver.conf
server {
    listen 80 default_server;
    server_name example.com;
    location / {
        include proxy_params;
        proxy_pass http://localhost:5000;
    }
    location /ws {
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $host;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "Upgrade";
        proxy_pass http://localhost:5000;
    }
}

How to turn off buffering on an Nginx server for Server-Sent Events

Problem: Nginx is buffering the Server-Sent Events (SSE).
Setup: Node v12.13.1, Nginx 1.16.1, Chrome v80
Scenario:
I tried to turn off buffering with proxy_buffering off; and even added "X-Accel-Buffering": "no" to the server response headers, but nginx is still buffering all SSE messages. If I stop the node server or restart nginx, all the SSE messages are delivered to the client in bulk. I've tried a lot but don't know what I'm missing.
Nginx config file:
events {
    worker_connections 1024;
}
http {
    include mime.types;
    sendfile on;
    keepalive_timeout 65;
    server {
        listen 4200;
        server_name localhost;
        location / {
            proxy_set_header Connection '';
            proxy_http_version 1.1;
            chunked_transfer_encoding off;
            proxy_buffering off;
            proxy_cache off;
            proxy_pass http://localhost:8700;
        }
    }
}
Node server:
var express = require('express');
var app = express();

var template =
  `<!DOCTYPE html> <html> <body>
    <script type="text/javascript">
      var source = new EventSource("/events/");
      source.onmessage = function(e) {
        document.body.innerHTML += e.data + "<br>";
      };
    </script>
  </body> </html>`;

app.get('/', function (req, res) {
  res.send(template); // <- Return the static template above
});

var clientId = 0;
var clients = {}; // <- Keep a map of attached clients

// Called once for each new client. Note, this response is left open!
app.get('/events/', function (req, res) {
  req.socket.setTimeout(Number.MAX_VALUE);
  res.writeHead(200, {
    'Content-Type': 'text/event-stream', // <- Important headers
    'Cache-Control': 'no-cache',
    'Connection': 'keep-alive',
    'X-Accel-Buffering': 'no'
  });
  res.write('\n');
  (function (clientId) {
    clients[clientId] = res; // <- Add this client to those we consider "attached"
    req.on("close", function () {
      delete clients[clientId];
    }); // <- Remove this client when it disconnects
  })(++clientId);
});

setInterval(function () {
  var msg = Math.random();
  console.log("Clients: " + Object.keys(clients) + " <- " + msg);
  for (clientId in clients) {
    clients[clientId].write("data: " + msg + "\n\n"); // <- Push a message to a single attached client
  }
}, 2000);

app.listen(process.env.PORT || 8700);
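To take the browser out of the equation while debugging this, the raw event stream can be watched with curl; the -N flag disables curl's own output buffering:
curl -N http://localhost:4200/events/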

Socket.io nginx Redis - Client not receiving messages from server

I'm quite confused. I was testing this application on localhost with MAMP and everything was working fine, but when I moved to the development server the client stopped receiving messages from the server. I'm using it inside a Vue.js component.
On the client I've logged socket.on('connect') and the second check is returning true.
This is my code:
Server
var server = require('http').Server();
var io = require('socket.io')(server);
var Redis = require('ioredis');
var redis = new Redis();

redis.subscribe('chat');
redis.on('message', (channel, message) => {
  message = JSON.parse(message);
  // channel:event:to_id:to_type - message.data
  io.emit(channel + ':' + message.event + ':' + message.to_id + ':' + message.to_type, message.data);
  console.log(message + ' ' + channel);
});

server.listen('6001');
Client
var io = require('socket.io-client');
var socket = io.connect('http://localhost:6001', {reconnect: true});
...
mounted() {
  console.log('check 1', socket.connected);
  socket.on('connect', function () {
    console.log('check 2', socket.connected);
  });
  socket.on('chat:newMessage:' + this.fromid + ':' + this.fromtype, (data) => {
    console.log('new message');
    var message = {
      'msg': data.message,
      'type': 'received',
      'color': 'green',
      'pos': 'justify-content-start',
    };
    this.messages.push(message);
  });
}
Nginx conf
upstream node1 {
    server 127.0.0.1:6001;
}
server {
    listen 80;
    server_name localhost;
    location / {
        # Configure proxy to pass data to upstream node1
        proxy_pass http://node1/socket.io/;
        # HTTP version 1.1 is needed for sockets
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}
Thanks a lot!
OK, I've found a solution. I was using localhost: I replaced
var socket = io.connect('http://localhost:6001', {reconnect: true});
with:
var socket = io.connect('http://' + window.location.hostname + ':6001', {reconnect: true});
And now everything is working fine.
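An alternative sketch, assuming nginx stays in front: proxy the /socket.io path on port 80 instead of exposing 6001 directly (this location block is an assumption, not the poster's final config):
location /socket.io/ {
    proxy_pass http://node1;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
}
The client would then connect to the page's own origin without a hard-coded port:
var socket = io.connect('http://' + window.location.hostname, {reconnect: true});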

HTTPS Proxy Server in node.js

I am developing a Node.js proxy server application and I want it to support the HTTP and HTTPS (SSL) protocols (as a server).
I'm currently using node-http-proxy like this:
const httpProxy = require('http-proxy'),
      http = require('http');

var server = httpProxy.createServer(9000, 'localhost', function (req, res, proxy) {
  console.log(req.url);
  proxy.proxyRequest(req, res);
});

http.createServer(function (req, res) {
  res.end('hello!');
}).listen(9000);

server.listen(8000);
I set up my browser to use an HTTP proxy on localhost:8000 and it works. I also want to catch HTTPS requests (i.e. set up my browser to use localhost:8000 as an HTTPS proxy as well and catch the requests in my application). Could you please help me with how I can do that?
PS:
If I subscribe to the upgrade event of the httpProxy server object, I can get the requests, but I don't know how to forward the request and send the response to the client:
server.on('upgrade', function (req, socket, head) {
  console.log(req.url);
  // I don't know how to forward the request and send the response to the client
});
Any help would be appreciated.
Solutions barely exist for this, and the documentation is poor at best for supporting both on one server. The trick here is to understand that client proxy configurations may send https requests to an http proxy server. This is true for Firefox if you specify an HTTP proxy and then check "same for all protocols".
You can handle https connections sent to an HTTP server by listening for the "connect" event. Note that you won't have access to the response object on the connect event, only the socket and bodyhead. Data sent over this socket will remain encrypted to you as the proxy server.
In this solution, you don't have to make your own certificates, and you won't have certificate conflicts as a result. The traffic is simply proxied, not intercepted and rewritten with different certificates.
// Install npm dependencies first
// npm init
// npm install --save url@0.10.3
// npm install --save http-proxy@1.11.1
var httpProxy = require("http-proxy");
var http = require("http");
var url = require("url");
var net = require('net');

var server = http.createServer(function (req, res) {
  var urlObj = url.parse(req.url);
  var target = urlObj.protocol + "//" + urlObj.host;
  console.log("Proxy HTTP request for:", target);
  var proxy = httpProxy.createProxyServer({});
  proxy.on("error", function (err, req, res) {
    console.log("proxy error", err);
    res.end();
  });
  proxy.web(req, res, { target: target });
}).listen(8080); // this is the port your clients will connect to

var regex_hostport = /^([^:]+)(:([0-9]+))?$/;

var getHostPortFromString = function (hostString, defaultPort) {
  var host = hostString;
  var port = defaultPort;
  var result = regex_hostport.exec(hostString);
  if (result != null) {
    host = result[1];
    if (result[2] != null) {
      port = result[3];
    }
  }
  return [host, port];
};

server.addListener('connect', function (req, socket, bodyhead) {
  var hostPort = getHostPortFromString(req.url, 443);
  var hostDomain = hostPort[0];
  var port = parseInt(hostPort[1]);
  console.log("Proxying HTTPS request for:", hostDomain, port);
  var proxySocket = new net.Socket();
  proxySocket.connect(port, hostDomain, function () {
    proxySocket.write(bodyhead);
    socket.write("HTTP/" + req.httpVersion + " 200 Connection established\r\n\r\n");
  });
  proxySocket.on('data', function (chunk) {
    socket.write(chunk);
  });
  proxySocket.on('end', function () {
    socket.end();
  });
  proxySocket.on('error', function () {
    socket.write("HTTP/" + req.httpVersion + " 500 Connection error\r\n\r\n");
    socket.end();
  });
  socket.on('data', function (chunk) {
    proxySocket.write(chunk);
  });
  socket.on('end', function () {
    proxySocket.end();
  });
  socket.on('error', function () {
    proxySocket.end();
  });
});
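The CONNECT handling above can be exercised with curl; -x points curl at the proxy, using the port from the listen(8080) call:
curl -x http://127.0.0.1:8080 https://www.example.com/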
Here is my NO-dependencies solution (pure Node.js system libraries):

const http = require('http')
const port = process.env.PORT || 9191
const net = require('net')
const url = require('url')

// discard all requests to the proxy server except the HTTP/1.1 CONNECT method
const requestHandler = (req, res) => {
  res.writeHead(405, {'Content-Type': 'text/plain'})
  res.end('Method not allowed')
}

const server = http.createServer(requestHandler)

const listener = server.listen(port, (err) => {
  if (err) {
    return console.error(err)
  }
  const info = listener.address()
  console.log(`Server is listening on address ${info.address} port ${info.port}`)
})

// listen only for the HTTP/1.1 CONNECT method
server.on('connect', (req, clientSocket, head) => {
  console.log(clientSocket.remoteAddress, clientSocket.remotePort, req.method, req.url)
  // here you can add a check for any username/password; I just check that this header exists!
  if (!req.headers['proxy-authorization']) {
    clientSocket.write([
      'HTTP/1.1 407 Proxy Authentication Required',
      'Proxy-Authenticate: Basic realm="proxy"',
      'Proxy-Connection: close',
    ].join('\r\n'))
    clientSocket.end('\r\n\r\n') // empty body
    return
  }
  // extract destination host and port from the CONNECT request
  const {port, hostname} = url.parse(`//${req.url}`, false, true)
  if (hostname && port) {
    const serverErrorHandler = (err) => {
      console.error(err.message)
      if (clientSocket) {
        clientSocket.end(`HTTP/1.1 500 ${err.message}\r\n`)
      }
    }
    const serverEndHandler = () => {
      if (clientSocket) {
        clientSocket.end(`HTTP/1.1 500 External Server End\r\n`)
      }
    }
    const serverSocket = net.connect(port, hostname) // connect to destination host and port
    const clientErrorHandler = (err) => {
      console.error(err.message)
      if (serverSocket) {
        serverSocket.end()
      }
    }
    const clientEndHandler = () => {
      if (serverSocket) {
        serverSocket.end()
      }
    }
    clientSocket.on('error', clientErrorHandler)
    clientSocket.on('end', clientEndHandler)
    serverSocket.on('error', serverErrorHandler)
    serverSocket.on('end', serverEndHandler)
    serverSocket.on('connect', () => {
      clientSocket.write([
        'HTTP/1.1 200 Connection Established',
        'Proxy-agent: Node-VPN',
      ].join('\r\n'))
      clientSocket.write('\r\n\r\n') // empty body
      // "blindly" (for performance) pipe the client socket and destination socket into each other
      serverSocket.pipe(clientSocket, {end: false})
      clientSocket.pipe(serverSocket, {end: false})
    })
  } else {
    clientSocket.end('HTTP/1.1 400 Bad Request\r\n')
    clientSocket.destroy()
  }
})
I tested this code with the Firefox proxy settings (it even asks for a username and password!). I entered the IP address of the machine where this code runs and the 9191 port, as you can see in the code. I also set "Use this proxy server for all protocols". I ran this code locally and on a VPS; in both cases it works!
You can test your Node.js proxy with curl:
curl -x http://username:password@127.0.0.1:9191 https://www.google.com/
I have created an HTTP/HTTPS proxy with the aid of the http-proxy module: https://gist.github.com/ncthis/6863947
Code as of now:
var fs = require('fs'),
    http = require('http'),
    https = require('https'),
    httpProxy = require('http-proxy');

var isHttps = true; // do you want an https proxy?

var options = {
  https: {
    key: fs.readFileSync('key.pem'),
    cert: fs.readFileSync('key-cert.pem')
  }
};

// this is the target server
var proxy = new httpProxy.HttpProxy({
  target: {
    host: '127.0.0.1',
    port: 8080
  }
});

if (isHttps)
  https.createServer(options.https, function (req, res) {
    console.log('Proxying https request at %s', new Date());
    proxy.proxyRequest(req, res);
  }).listen(443, function (err) {
    if (err)
      console.log('Error serving https proxy request: %s', err);
    console.log('Created https proxy. Forwarding requests from %s to %s:%s', '443', proxy.target.host, proxy.target.port);
  });
else
  http.createServer(function (req, res) {
    console.log('Proxying http request at %s', new Date());
    console.log(req);
    proxy.proxyRequest(req, res);
  }).listen(80, function (err) {
    if (err)
      console.log('Error serving http proxy request: %s', err);
    console.log('Created http proxy. Forwarding requests from %s to %s:%s', '80', proxy.target.host, proxy.target.port);
  });
The node-http-proxy docs contain examples of this; look for "Proxying to HTTPS from HTTPS" at https://github.com/nodejitsu/node-http-proxy. The configuration process is slightly different in every browser: some have the option to use your proxy settings for all protocols, while others require you to configure the SSL proxy separately.
