I'm trying to test simple ping & pong responses over WebSockets using Ratchet. My WebSocket server does not see any responses from my web browser's WebSocket client, but packets sent by the WebSocket server are received fine by the browser. I'm trying to find out why.
My current guesses are:
I'm missing some HTTP header(s)
I have to encode packets in the browser, e.g. wsclient.send(encodeFrame("{'action': 'pong'}"));
CloudFlare is not recognizing the packet in the WS stream as valid and discards it
CloudFlare or nginx on the EC2 instance is doing some weird buffering
Ratchet is not recognizing the packet at the lowest IoServer level and discards it
But I never get any errors or exceptions from this level
Setup:
Linux server # Amazon EC2
DNS # CloudFlare with free plan and acceleration on + forced HTTPS redirect on
HTTP server is nginx
No HTTPS on nginx (Cloudflare redirects HTTPS -> EC2 HTTP)
WebSocket server (Ratchet) running at 127.0.0.1:65000 on EC2
Nginx redirects /api to 127.0.0.1:65000 (Ratchet)
Tested with my own WebSocket client on the EC2 instance:
127.0.0.1:65000 works fine
<Amazon NAT IP of instance>:80 works fine
<Amazon public IP of instance>:80 works fine
<CloudFlare IP of Amazon public IP>:80 connects to the WebSocket server at the application implementation level, but no packet is seen in the onMessage method at any level (App, WsServer, HttpServer)
<CloudFlare IP of Amazon public IP>:443 gives 400 Bad Request because the test client is just a plain TCP stream
Tested from local machine:
Connected directly to the host via CloudFlare's cached IP. Dojox.Socket connects to wss://host/api. The connection is again seen at the application implementation level in Ratchet (onOpen is fired). My browser sees the ping packets fine, so sending from Ratchet works.
But when I try to send a pong reply to the ping from the browser, the onMessage method is never fired at any level in Ratchet. The connection stays open, and if I watch with Fiddler both pings and pongs are constantly sent, but the WebSocket server never receives those pongs (onMessage).
Following Fiddler's WebSocket stream shows that pongs from the browser have "Data masked by key: <some hex>" but pings from Ratchet are not masked.
Connection summary:
Page load:
Local machine http://host/ → CloudFlare HTTPS redirect https://host/ → http://host-with-amazon-public-ip/ → http://host-with-amazon-NAT-ip/ → HTML + JS page that loads wss WebSocket Connection to /api
WebSocket connection to /api:
CloudFlare HTTPS redirected wss://host/api → http://host-with-amazon-public-ip/api → http://host-with-amazon-NAT-ip/api → local server nginx redirect /api → 127.0.0.1:65000 → Connection upgrade → WebSocket stream → WebSocket stream for web browser
nginx.conf
server {
    listen 80;
    server_name test.example.com;
    root /home/raspi/test/public;
    autoindex off;
    index index.php;
    access_log /home/raspi/test/http-access.log;
    error_log /home/raspi/test/http-error.log notice;
    location / {
        index index.php;
        try_files $uri $uri/ /index.php?$args;
    }
    location /api {
        access_log /home/raspi/test/api-access.log;
        error_log /home/raspi/test/api-error.log;
        expires epoch;
        proxy_ignore_client_abort on;
        proxy_buffering off;
        proxy_request_buffering off;
        proxy_cache off;
        proxy_pass http://127.0.0.1:65000/;
        proxy_http_version 1.1;
        proxy_set_header Host $host;
        proxy_set_header Connection "keep-alive, Upgrade";
        proxy_set_header Upgrade "websocket";
        proxy_set_header Accept-Encoding "gzip, deflate";
        proxy_set_header Sec-WebSocket-Extensions "permessage-deflate";
        proxy_set_header Sec-WebSocket-Protocol "game";
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
    location ^~ /js/app/ {
        try_files $uri /;
        expires epoch;
        add_header Cache-Control "no-cache" always;
    }
    location ~* ^.+.(jpg|jpeg|gif|css|png|js|ico|html|xml|txt)$ {
        try_files $uri /;
        access_log off;
        expires max;
    }
    location = /robots.txt { access_log off; log_not_found off; }
    location = /favicon.ico { access_log off; log_not_found off; }
    location ~ /\. { access_log off; log_not_found off; deny all; }
    location ~ \.php$ {
        try_files $uri =404;
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        fastcgi_pass unix:/var/run/php-fpm/php-fpm.sock;
        fastcgi_index index.php;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        include fastcgi_params;
    }
}
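For comparison, the usual way to proxy WebSocket upgrades through nginx is to forward the Upgrade/Connection headers the browser actually sent rather than hardcoding them. A minimal sketch of that common pattern (the map block belongs at the http level; addresses are placeholders and this is not a tested replacement for the config above):

map $http_upgrade $connection_upgrade {
    default upgrade;
    ''      close;
}

server {
    listen 80;

    location /api {
        proxy_pass http://127.0.0.1:65000/;
        proxy_http_version 1.1;
        # pass through whatever the client sent in its handshake
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $connection_upgrade;
        proxy_set_header Host $host;
    }
}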
WebSockets are currently only available at CloudFlare's Business and Enterprise levels. You mentioned you're on the free plan; that's the issue here.
The problem is that your provider rejects the Connection and Upgrade headers. You can go to Ratchet/WebSocket/Version/RFC6455/HandshakeVerifier.php and modify the code to:
...
public function verifyAll(RequestInterface $request) {
    $passes = 0;

    $passes += (int)$this->verifyMethod($request->getMethod());
    $passes += (int)$this->verifyHTTPVersion($request->getProtocolVersion());
    $passes += (int)$this->verifyRequestURI($request->getPath());
    $passes += (int)$this->verifyHost((string)$request->getHeader('Host'));
    $passes += (int)$this->verifyUpgradeRequest((string)$request->getHeader('Upgrade'));
    $passes += (int)$this->verifyConnection((string)$request->getHeader('Connection'));

    // dump the incoming handshake request so you can see which headers survived
    die((string)$request);
...
And you will see that the resulting request doesn't have the required fields.
The workaround: override the WsServer::onOpen function and add those fields to the request manually. But this is unsafe...
Related
I have 3 docker containers in the same network:
Storage (golang) - it provides API for uploading video files.
Streamer (nginx) - it streams uploaded files
Reverse Proxy (let's call it just Proxy)
I have HTTPS protocol between User and Proxy.
Let's assume that there is a file with id=c14de868-3130-426a-a0cc-7ff6590e9a1f and User wants to see it. So User makes a request to https://stream.example.com/hls/master.m3u8?id=c14de868-3130-426a-a0cc-7ff6590e9a1f. Streamer knows video id (from query param), but it doesn't know the path to the video, so it makes a request to the storage and exchanges video id for the video path. Actually it does proxy_pass to http://docker-storage/getpath?id=c14de868-3130-426a-a0cc-7ff6590e9a1f.
docker-storage is an upstream server, and the protocol is HTTP because
I have no SSL connection between the docker containers on the local network.
After Streamer gets the path to the file it starts streaming. But the User's browser starts throwing a Mixed Content error, because the first request was over HTTPS and after proxy_pass it became HTTP.
Here is the nginx.conf file (it is the Streamer container):
worker_processes auto;
events {
    use epoll;
}
http {
    error_log stderr debug;
    default_type application/octet-stream;
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    vod_mode local;
    vod_metadata_cache metadata_cache 16m;
    vod_response_cache response_cache 512m;
    vod_last_modified_types *;
    vod_segment_duration 9000;
    vod_align_segments_to_key_frames on;
    vod_dash_fragment_file_name_prefix "segment";
    vod_hls_segment_file_name_prefix "segment";
    vod_manifest_segment_durations_mode accurate;
    open_file_cache max=1000 inactive=5m;
    open_file_cache_valid 2m;
    open_file_cache_min_uses 1;
    open_file_cache_errors on;
    aio on;
    upstream docker-storage {
        # There is a docker container called storage on the same network
        server storage:9000;
    }
    server {
        listen 9000;
        server_name localhost;
        root /srv/static;
        location = /exchange-id-to-path {
            proxy_pass $auth_request_uri;
            proxy_pass_request_body off;
            proxy_set_header Content-Length "";
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Original-URI $request_uri;
            # I tried to experiment with this header
            proxy_set_header X-Forwarded-Proto https;
            set $filepath $upstream_http_the_file_path;
        }
        location /hls {
            # I use auth_request module just to get the path from response header (The-File-Path)
            set $auth_request_uri "http://docker-storage/getpath?id=$arg_id";
            auth_request /exchange-id-to-path;
            auth_request_set $filepath $upstream_http_the_file_path;
            # Here I provide path to the file I want to stream
            vod hls;
            alias $filepath/$arg_id;
        }
    }
}
Here is a screenshot from browser console:
Here is the response from a successful (200) request:
#EXTM3U
#EXT-X-STREAM-INF:PROGRAM-ID=1,BANDWIDTH=1470038,RESOLUTION=1280x720,FRAME-RATE=25.000,CODECS="avc1.4d401f,mp4a.40.2"
http://stream.example.com/hls/index-v1-a1.m3u8
#EXT-X-I-FRAME-STREAM-INF:BANDWIDTH=171583,RESOLUTION=1280x720,CODECS="avc1.4d401f",URI="http://stream.example.com/hls/iframes-v1-a1.m3u8"
The question is: how can I keep the HTTPS protocol after proxy_pass to HTTP?
p.s. I use Kaltura nginx-vod-module for streaming video files.
I think proxy_pass isn't the problem here. When the vod module returns the index path it uses an absolute URL with HTTP protocol. A relative URL should be enough since the index file and the chunks are under the same domain (if I understood it correctly).
Try setting vod_hls_absolute_index_urls off; (and vod_hls_absolute_master_urls off; as well), so your browser sends requests relative to the stream.example.com domain over HTTPS.
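Against the configuration posted above, that would mean placing the directives at the http (or server) level, for example (just a sketch of where they go, not a tested setup):

http {
    # emit playlist URLs as relative paths, so the browser keeps the
    # scheme (https) of the page that loaded the player
    vod_hls_absolute_master_urls off;
    vod_hls_absolute_index_urls off;

    # ... existing vod_* directives and the server block stay as they are ...
}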
I am trying to get nginx to proxy a WebSocket connection to a backend server. All services are linked via docker-compose.
When I create the WebSocket object in my frontend React app:
let socket = new WebSocket(`ws://engine/socket`)
I get the following error:
WebSocket connection to 'ws://engine/socket' failed: Error in connection establishment: net::ERR_NAME_NOT_RESOLVED
I believe the problem comes from converting ws:// to http://, and that my nginx configuration does not seem to pick up the location match correctly.
Here is my nginx configuration:
server {
    # listen on port 80
    listen 80;
    root /usr/share/nginx/html;
    index index.html index.htm;
    location ^~ /engine {
        proxy_pass http://matching-engine:8081/;
        proxy_http_version 1.1;
        proxy_redirect off;
        proxy_set_header Host $host;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
    location / {
        try_files $uri $uri/ /index.html;
    }
    # Media: images, icons, video, audio, HTC
    location ~* \.(?:jpg|jpeg|gif|png|ico|cur|gz|svg|svgz|mp4|ogg|ogv|webm|htc)$ {
        expires 1M;
        access_log off;
        add_header Cache-Control "public";
    }
    # Javascript and CSS files
    location ~* \.(?:css|js)$ {
        try_files $uri =404;
        expires 1y;
        access_log off;
        add_header Cache-Control "public";
    }
    # Any route containing a file extension (e.g. /devicesfile.js)
    location ~ ^.+\..+$ {
        try_files $uri =404;
    }
}
Here is part of my docker-compose configuration:
matching-engine:
  image: amp-engine
  ports:
    - "8081:8081"
  depends_on:
    - mongodb
    - rabbitmq
    - redis
  deploy:
    restart_policy:
      condition: on-failure
      max_attempts: 3
      window: 120s

client:
  image: amp-client:latest
  container_name: "client"
  ports:
    - "80:80"
  depends_on:
    - matching-engine
  deploy:
    restart_policy:
      condition: on-failure
      max_attempts: 3
      window: 120s
docker-compose resolves 'matching-engine' automatically (I can make normal HTTP GET/POST requests that nginx resolves correctly, and nslookup finds the matching-engine correctly), so I believe the basic networking works for HTTP requests, which leads me to think that the problem comes from the location match in the nginx configuration.
How can one pick up a request that originates from `new WebSocket('ws://engine/socket')` in a location directive? I have tried the following ones:
location ^~ engine
location /engine
location /engine/socket
location ws://engine
without any success.
I have also tried changing new WebSocket('ws://engine/socket') to new WebSocket('/engine/socket'), but this fails (only ws:// or wss:// prefixes are accepted).
What's the way to make this configuration work?
As you are already exposing port 80 of your client container to your host via docker-compose, you could just connect to your WebSocket proxy via localhost:
new WebSocket('ws://localhost:80/engine')
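One thing to double-check once the hostname issue is sorted out is how the URI maps onto the backend: because the proxy_pass in the question carries a URI part (the trailing /), nginx replaces the matched location prefix with it. A sketch of the two variants (WebSocket headers omitted; whether the backend expects /socket or /engine/socket is an assumption about the matching-engine service):

# Variant 1: strip the /engine prefix before proxying
#   ws://localhost/engine/socket  ->  http://matching-engine:8081/socket
location /engine/ {
    proxy_pass http://matching-engine:8081/;
}

# Variant 2: forward the original URI unchanged (no URI part on proxy_pass)
#   ws://localhost/engine/socket  ->  http://matching-engine:8081/engine/socket
location /engine/ {
    proxy_pass http://matching-engine:8081;
}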
UPDATE:
I solved the mixed content problem with a plugin.
The main problem now is that when I go to the admin login page I get redirected to the login page inside <> instead of my domain.
I have a Rails application on Amazon Elastic Beanstalk, behind an Amazon Elastic Load Balancer.
On the same servers as the Rails application I have an nginx server with a reverse proxy to a WordPress blog on a different server (so it can be accessed as example.com/blog).
Our domain is on GoDaddy, where I have a forwarding rule from example.com to https://www.example.com. The domain itself is forwarded to the CNAME of the ELB.
On the Load Balancer there is a listener for port 443 and it's forwarded to port 80
Inside the server I have a rule that is forcing a redirection from http to https
When I used a single server without the Load Balancer the reverse proxy worked flawlessly, but since I started using it, the blog's assets are not loaded properly and I get the mixed content error.
nginx config that works without the ELB:
server {
    listen 80;
    server_name <<rails-app-domain>>;
    if ($time_iso8601 ~ "^(\d{4})-(\d{2})-(\d{2})T(\d{2})") {
        set $year $1;
        set $month $2;
        set $day $3;
        set $hour $4;
    }
    access_log /var/log/nginx/access.log main;
    access_log /var/log/nginx/healthd/application.log.$year-$month-$day-$hour healthd;
    location /health_check {
        access_log off;
        return 200;
    }
    location / {
        if ($http_x_forwarded_proto != 'https') {
            return 301 https://$server_name$request_uri;
        }
        proxy_pass http://my_app;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
    location ^~ /blog {
        proxy_pass http://<<wordpress-server-ip>>/blog;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_redirect http://<<wordpress-server-ip>>/ https://$host/;
        proxy_cookie_domain <<wordpress-server-ip>> $host;
    }
    location /assets {
        alias /var/app/current/public/assets;
        gzip_static on;
        gzip on;
        expires max;
        add_header Cache-Control public;
    }
    location /public {
        alias /var/app/current/public;
        gzip_static on;
        gzip on;
        expires max;
        add_header Cache-Control public;
    }
    location /cable {
        proxy_pass http://my_app;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}
WordPress wp-config.php:
define('WP_SITEURL', 'https://<<rails-app-domain>>/blog');
define('WP_HOME', 'https://<<rails-app-domain>>/blog');
define('FORCE_SSL_ADMIN', true);
What I tried:
Setting a sub_filter http -> https rewrite rule for all the locations inside /blog (a sketch of this follows the list)
A redirect rule for all the locations inside /blog from http to https
Adding a listener for port 443 in nginx and redirecting port 443 of the load balancer to port 443 of the server (instead of 80 like before)
Removing the domain forwarding on GoDaddy
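For reference, the sub_filter attempt from the first item typically looks something like this inside the /blog location (a sketch only; it did not solve the problem here, and it assumes the upstream responses arrive uncompressed):

location ^~ /blog {
    proxy_pass http://<<wordpress-server-ip>>/blog;
    # ask WordPress for uncompressed responses so sub_filter can rewrite them
    proxy_set_header Accept-Encoding "";
    sub_filter 'http://<<wordpress-server-ip>>/' 'https://$host/';
    sub_filter_once off;
    sub_filter_types text/html text/css text/javascript;
    # ... the other proxy_set_header / proxy_redirect lines stay unchanged ...
}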
Let's say we have the following quite minimal nginx.conf:
server {
    listen 443 default ssl;
    location /api/v1 {
        proxy_pass http://127.0.0.1:8080;
    }
}
Now, I'm trying to use nginx itself as an event-source. Another component in my system should be aware of any HTTP requests coming in, while ideally not blocking the traffic on this first proxy_pass directive.
Is there any possibility to have a second proxy_pass which "just" forwards the HTTP request to another component as well while completely ignoring the result of that forwarded request?
Edit: To state the requirement more clearly: What I want to achieve is that the same HTTP requests are sent to two different backend servers, only one of them really handling the connection in terms of nginx. The other should just be an "event ping" to notify the other service that there has been a request.
This can be done using the echo_location directive (or similar; browse the module's directives) of the third-party Nginx Echo module. You will need to compile Nginx with this module or use OpenResty, which is Nginx bundled with useful extras such as this.
Outline code:
server {
    [...]
    location /main {
        echo_location /sub;
        proxy_pass http://main.server:PORT;
    }
    location /sub {
        internal;
        proxy_pass http://alt.server:PORT;
    }
}
There is also the now undocumented post_action directive which does not require a third party module:
server {
    [...]
    location /main {
        proxy_pass http://main.server:PORT;
        post_action @sub;
    }
    location @sub {
        proxy_pass http://alt.server:PORT;
    }
}
This will fire a subrequest after the main request is completed. Here is an old answer where I recommended the use of this: NGinx - Count requests for a particular URL pattern.
However, this directive has been removed from the Nginx documentation and further usage of this is now a case of caveat emptor. Four years on from 2012 when I gave that answer, I wouldn't recommend using this.
I know this has already been answered, but I'd like to add a new, updated answer since it's turning up in searches 3 years later. The mirror module works wonderfully. I got this from the nginx docs so I assume it's official and available.
server {
    [...]
    location /main {
        mirror /mirror;
        proxy_pass http://main.server:PORT;
    }
    location /mirror {
        proxy_pass http://alt.server:PORT;
    }
}
That's the big advantage of nginx: you can serve from multiple backend servers. You just have to include one more location, referenced by another directive. Here is a sample of my sites-available/default that serves fastcgi (monodevelop), Glassfish (Java), and static content, with special error treatment. Hope it helps.
#fastcgi
location ~* \.(aspx)$ {
    root /home/published/;
    index Default.aspx;
    fastcgi_index Default.aspx;
    fastcgi_pass 127.0.0.1:9000;
    include /etc/nginx/fastcgi_params;
}

#Glassfish
location /GameFactoryService/ {
    index index.html;
    add_header Access-Control-Allow-Origin $http_origin;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-NginX-Proxy true;
    proxy_ssl_session_reuse off;
    proxy_set_header Host $http_host;
    proxy_redirect off;
    proxy_set_header X-Forwarded-Server $host;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_pass http://127.0.0.1:18000/GameFactoryService/;
}

#static content
location / {
    root /usr/share/nginx_static_content;
}

error_page 500 501 502 503 504 505 506 507 508 509 510 511 /50x.html;

#error
location = /50x.html {
    add_header Access-Control-Allow-Origin $http_origin;
    internal;
}
I tried to make a custom 404 page for Tornado and wanted to deploy it with nginx, but failed.
Here is my domain.conf (included by nginx.conf):
server {
    listen 80;
    server_name vm.tuzii.me;
    client_max_body_size 50M;
    location ^~ /app/static/ {
        root ~/dev_blog;
        if ($query_string) {
            expires max;
        }
    }
    location = /favicon.ico {
        rewrite (.*) /static/favicon.ico;
    }
    location = /robots.txt {
        rewrite (.*) /static/robots.txt;
    }
    error_page 404 /404.html;
    location /404.html {
        root /home/scenk;
        internal;
    }
    location / {
        proxy_pass_header Server;
        proxy_set_header Host $http_host;
        proxy_redirect off;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Scheme $scheme;
        proxy_pass http://frontends;
    }
}
But after reloading nginx, nothing happens. It seems like Tornado catches the 404 error before nginx does.
I have no idea how to solve this problem.
PS. I just want nginx to produce the 404 error, not to rewrite 'write_error' in the Tornado source.
Environment: Ubuntu 12.04, Tornado 2.4.1 running the site under supervisor, behind nginx, 4 processes.
I ran into the same problem, and what you actually need is this setting:
proxy_intercept_errors on;
From nginx proxy module documentation:
proxy_intercept_errors
Syntax: proxy_intercept_errors on | off
Default: off
Context: http, server, location
This directive decides if nginx will intercept responses with HTTP status codes of 400 and higher.
By default all responses will be sent as-is from the proxied server.
If you set this to on then nginx will intercept status codes that are explicitly handled by an error_page directive. Responses with status codes that do not match an error_page directive will be sent as-is from the proxied server.
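Applied to the configuration from the question, that means enabling the directive in the proxied location, roughly like this (a sketch based on the posted config):

location / {
    # hand 4xx/5xx responses from Tornado over to nginx's error_page handling
    proxy_intercept_errors on;
    proxy_pass http://frontends;
    # ... the existing proxy_set_header / proxy_redirect lines stay unchanged ...
}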
Finally solved this problem. Because of
proxy_pass_header Server;
the real Tornado Server header was being sent. To hide the real server, simply change it to
proxy_pass_header User-Agent;
That's all.