RabbitMQ WebSocket 404 Not Found - nginx

I'm running rabbitmq-server v3.3.5-1.1 on Debian 8.2. I have enabled rabbitmq_web_stomp and rabbitmq_web_stomp_examples as suggested in the docs:
rabbitmq-plugins enable rabbitmq_web_stomp
rabbitmq-plugins enable rabbitmq_web_stomp_examples
All examples exposed at http://127.0.0.1:15670 work as intended, but they all use SockJS rather than the browser's native WebSocket:
// Stomp.js boilerplate
var ws = new SockJS('http://' + window.location.hostname + ':15674/stomp');
var client = Stomp.over(ws);
I would like to stick with WebSocket, so I tried what the docs suggest:
var ws = new WebSocket('ws://127.0.0.1:15674/ws');
This throws an error:
WebSocket connection to 'ws://127.0.0.1:15674/ws' failed: Error during WebSocket handshake: Unexpected response code: 404
Further tests with netcat confirm 404:
# netcat -nv 127.0.0.1 15674
127.0.0.1 15674 open
GET /ws HTTP/1.1
Host: 127.0.0.1
HTTP/1.1 404 Not Found
Connection: close
Content-Length: 0
Date: Sat, 23 Jan 2016 20:15:13 GMT
Server: Cowboy
Obviously Cowboy does not expose a /ws path, so I wonder:
Is it possible to reconfigure Cowboy in this situation? How? Is it worth it?
Can I use nginx in place of Cowboy (preferred option)? How?
What other options do I have?
EDIT
The RabbitMQ docs are misleading. The correct WebSocket URI is:
http://127.0.0.1:15674/stomp/websocket

good job, but:
new WebSocket('http://127.0.0.1:15674/stomp/websocket')
VM98:2 Uncaught DOMException: Failed to construct 'WebSocket': The URL's scheme must be either 'ws' or 'wss'. 'http' is not allowed.(…)(anonymous function) ...
you need to use the ws/wss scheme:
new WebSocket('ws://127.0.0.1:15674/stomp/websocket')
WebSocket {url: "ws://127.0.0.1:15674/stomp/websocket", readyState: 0, bufferedAmount: 0, onopen: null, onerror: null…}
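If you want to sanity-check the handshake outside the browser, something like this should work. A minimal sketch, assuming the third-party websocket-client package is installed and default guest credentials; it only verifies that the endpoint performs the WebSocket upgrade (i.e. no more 404):

# Quick handshake check against the Web STOMP endpoint, outside the browser.
# Assumes "pip install websocket-client"; this is an illustration, not the
# official RabbitMQ tooling.
import websocket

ws = websocket.create_connection("ws://127.0.0.1:15674/stomp/websocket")
print("upgrade ok:", ws.connected)  # True means the handshake succeeded
ws.close()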

Related

Discovering nsqd server address from nsqlookupd

I'm running an nsq cluster in Docker containers using the following docker-compose.yaml file:
version: '2'
services:
  nsqlookupd:
    image: nsqio/nsq
    command: /nsqlookupd
    ports:
      - "4160"
      - "4161:4161"
  nsqd:
    image: nsqio/nsq
    command: /nsqd --lookupd-tcp-address=nsqlookupd:4160 --data-path=/data
    volumes:
      - data:/data
    ports:
      - "4150:4150"
      - "4151:4151"
  nsqadmin:
    image: nsqio/nsq
    command: /nsqadmin --lookupd-http-address=nsqlookupd:4161
    ports:
      - "4171:4171"
volumes:
  data:
Everything runs fine. But if I call the /nodes endpoint on the nsqlookupd server, I get this:
$ http http://localhost:4161/nodes
HTTP/1.1 200 OK
Content-Length: 238
Content-Type: application/json; charset=utf-8
Date: Tue, 24 Jan 2017 08:44:27 GMT
{
    "data": {
        "producers": [
            {
                "broadcast_address": "7dd3d550e7f8",
                "hostname": "7dd3d550e7f8",
                "http_port": 4151,
                "remote_address": "172.18.0.4:57156",
                "tcp_port": 4150,
                "tombstones": [],
                "topics": [],
                "version": "0.3.8"
            }
        ]
    },
    "status_code": 200,
    "status_txt": "OK"
}
The broadcast address looks like the container's name/hostname. I tried to ping it on port 4151 just in case, but it fails.
> http http://7dd3d550e7f8:4151/ping
http: error: ConnectionError: HTTPConnectionPool(host='7dd3d550e7f8', port=4151): Max retries exceeded with url: /ping (Caused by NewConnectionError('<requests.packages.urllib3.connection.HTTPConnection object at 0x000001C397173EF0>: Failed to establish a new connection: [Errno 11001] getaddrinfo failed',)) while doing GET request to URL: http://7dd3d550e7f8:4151/ping
Same for the remote address:
> http http://172.18.0.4:4151/ping
http: error: ConnectionError: HTTPConnectionPool(host='172.18.0.4', port=4151): Max retries exceeded with url: /ping (Caused by NewConnectionError('<requests.packages.urllib3.connection.HTTPConnection object at 0x000001C0D9545F28>: Failed to establish a new connection: [WinError 10060] A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond',)) while doing GET request to URL: http://172.18.0.4:4151/ping
Everything works if I use localhost or 127.0.0.1:
> http http://localhost:4151/ping
HTTP/1.1 200 OK
Content-Length: 2
Content-Type: text/plain; charset=utf-8
Date: Tue, 24 Jan 2017 08:51:30 GMT
OK
But that's cheating. The whole point of the nsqlookupd servers is that they keep track of the nsqd servers so clients can dynamically get a list of responsive servers.
Is it possible to get an accessible URL/IP address for nsqd nodes from the nsqlookupd server when the nsqd nodes are running in Docker containers?
Is there some magic incantation to make it work?
Did someone try maybe using Swarm or Kubernetes?
I found that GKE now supports StatefulSet as of 1.5.2.
That means your nsqd and nsqlookupd instances can be spun up as StatefulSet instances. You can then use -broadcast-address=$POD_IP from the downward API, and your producers will be able to publish to nsq-0.nsq-service-name, nsq-1.nsq-service-name, etc., while consumers will get the advertised nsqd IP address from nsqlookupd. That works for us; I just managed to get it working today.
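For context, this is roughly what a consumer does with the lookupd data, which is why broadcast_address has to resolve from wherever the client runs. A minimal sketch, assuming the third-party requests package and the wrapped /nodes response format shown above:

# Discover nsqd nodes via nsqlookupd and check that their broadcast
# addresses are actually reachable from this host (illustrative only).
# Assumes "pip install requests".
import requests

LOOKUPD = "http://localhost:4161"

producers = requests.get(f"{LOOKUPD}/nodes").json()["data"]["producers"]
for node in producers:
    addr = f"http://{node['broadcast_address']}:{node['http_port']}"
    try:
        requests.get(f"{addr}/ping", timeout=2).raise_for_status()
        print(addr, "reachable")
    except requests.RequestException as exc:
        print(addr, "NOT reachable:", exc)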

HTTP Client PutAsync (Error 405 Method Not Allowed)

I am trying to update an object with a Web API hosted on a remote server. I can retrieve the object fine, but when I try to update it, the response is 405 Method Not Allowed. I tested hosting my service on a colleague's machine nearby, and it works fine there. Do I need to change some configuration, or is it something else?
Thank you.
client.DefaultRequestHeaders.Accept.Clear();
client.DefaultRequestHeaders.Accept.Add(new MediaTypeWithQualityHeaderValue("application/json"));
HttpResponseMessage response = client.PutAsJsonAsync("api/Collaborateurs/" + coll.matricule_collaborateur, coll).Result;
if (response.IsSuccessStatusCode) {
}
Error:
{StatusCode: 405, ReasonPhrase: 'Method Not Allowed', Version: 1.1, Content: System.Net.Http.StreamContent, Headers:
{
Date: Thu, 12 Nov 2015 14:28:10 GMT
Server: Microsoft-IIS/7.5
X-Powered-By: ASP.NET
Content-Length: 1343
Allow: GET
Allow: HEAD
Allow: OPTIONS
Allow: TRACE
Content-Type: text/html
}}
WebDAV is known to interfere with the PUT verb. Try uninstalling WebDAV if it is present on the server and you do not use it.
It may also be that you need to contact the remote server's administrator: PUT is sometimes blocked by network switches/routers, i.e. there may well be nothing you can do to fix this in code. Try using POST instead.

Getting 404 error if requesting a page through proxy, but 200 if connecting directly

I am developing an HTTP proxy in Java. I resend all the data from the client to the server without touching it, but for some URLs (for example this one) the server returns a 404 error when I connect through my proxy.
The requested URL uses Varnish caching, which might be the root of the problem. I cannot reconfigure it - it is not mine.
If I request that URL directly with browser, the server returns 200 and the image is shown correctly.
I am stuck because I do not even know what to read or how to phrase a search query.
Thanks a lot.
Fix the Host: header of the re-issued request. The request going out from the proxy either has no Host header or a broken one (or only an X-Host header). Also note that the proxy application performs its own DNS lookup, which might yield a different IP address than the one your local computer resolved (where you issued the original request).
This works:
> curl -s -D - -o /dev/null http://212.25.95.152/w/w-200/1902047-41.jpg -H "Host: msc.wcdn.co.il"
HTTP/1.1 200 OK
Content-Type: image/jpeg
Cache-Control: max-age = 315360000
magicmarker: 1
Content-Length: 27922
Accept-Ranges: bytes
Date: Sun, 05 Jul 2015 00:52:08 GMT
X-Varnish: 2508753650 2474246958
Age: 67952
Via: 1.1 varnish
Connection: keep-alive
X-Cache: HIT
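For comparison, the same check from Python: connect to the origin address but send the site's original Host header, which is exactly what the proxy has to preserve when re-issuing the request. A minimal sketch using only the standard library, with the IP, path, and hostname taken from the curl example above:

# Re-issue the request against the origin IP while preserving the
# original Host header, which is what the proxy must forward.
import http.client

conn = http.client.HTTPConnection("212.25.95.152", 80, timeout=10)
conn.request("GET", "/w/w-200/1902047-41.jpg",
             headers={"Host": "msc.wcdn.co.il"})
resp = conn.getresponse()
print(resp.status, resp.reason)        # expect 200 OK with the correct Host
print(resp.getheader("Content-Type"))  # image/jpeg
conn.close()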

NGINX + uWSGI Connection Reset by Peer

I'm trying to host Bottle Application on NGINX using uWSGI.
Here's my nginx.conf
location /myapp/ {
    include uwsgi_params;
    uwsgi_param X-Real-IP $remote_addr;
    uwsgi_param Host $http_host;
    uwsgi_param UWSGI_SCRIPT myapp;
    uwsgi_pass 127.0.0.1:8080;
}
I'm running uwsgi like this:
uwsgi --enable-threads --socket :8080 --plugin python --wsgi-file ./myApp/myapp.py
I'm sending a POST request using Dev HTTP Client, which hangs indefinitely when I send the request to:
http://localhost/myapp
The uWSGI server receives the request and prints:
[pid: 4683|app: 0|req: 1/1] 127.0.0.1 () {50 vars in 806 bytes} [Thu Oct 25 12:29:36 2012] POST /myapp => generated 737 bytes in 11 msecs (HTTP/1.1 404) 2 headers in 87 bytes (1 switches on core 0)
but the nginx error log shows:
2012/10/25 12:20:16 [error] 4364#0: *11 readv() failed (104: Connection reset by peer) while reading upstream, client: 127.0.0.1, server: localhost, request: "POST /myApp/myapp/ HTTP/1.1", upstream: "uwsgi://127.0.0.1:8080", host: "localhost"
What to do?
Make sure to consume your POST data in your application.
For example, if you have a Django/Python application:
from django.http import HttpResponse

def my_view(request):
    # ensure the POST data is read, even if you don't need it
    # without this you get: failed (104: Connection reset by peer)
    data = request.body
    return HttpResponse("Hello World")
Some details: https://uwsgi-docs.readthedocs.io/en/latest/ThingsToKnow.html
You cannot post data from the client without reading it in your application. While this is not a problem for uWSGI itself, nginx will fail. You can 'fake' it using the --post-buffering option of uWSGI to automatically read data from the socket (if available), but you'd better 'fix' your app (even if I do not consider this a bug).
This problem occurs when the body of a request is not consumed, since uwsgi cannot know whether it will still be needed at some point. So uwsgi will keep holding on to the data either until it is consumed or until nginx resets the connection (because upstream timed out).
The author of uwsgi explains it here:
08:21 < unbit> plaes: does your DELETE request (not-response) have a body ?
08:40 < unbit> and do you read that body in your app ?
08:41 < unbit> from the nginx logs it looks like it has a body and you are not reading it in the app
08:43 < plaes> so DELETE request shouldn't have the body?
08:43 < unbit> no i mean if a request has a body you have to read/consume it
08:44 < unbit> otherwise the socket will be clobbered
So to fix this you need to make sure to always either read the whole request body, or not send a body if it is not necessary (e.g. for a DELETE).
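Since the original question is about Bottle, here is the same idea there. A minimal sketch (the /myapp route and the response text are placeholders): read request.body in any handler that can receive a body:

# Bottle handler that explicitly consumes the request body so uWSGI
# does not leave unread data on the socket.
from bottle import post, request, run

@post('/myapp')
def my_handler():
    raw = request.body.read()  # consume the POST body even if it is unused
    return 'received %d bytes' % len(raw)

if __name__ == '__main__':
    run(host='127.0.0.1', port=8080)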
Don't use threads!
I had the same problem with the Global Interpreter Lock in Python under uWSGI.
When I don't use threads, there is no connection reset.
Example of a uWSGI config (1 GB RAM on the server):
[root@mail uwsgi]# cat myproj_config.yaml
uwsgi:
  print: Myproject Configuration Started
  socket: /var/tmp/myproject_uwsgi.sock
  pythonpath: /sites/myproject/myproj
  env: DJANGO_SETTINGS_MODULE=settings
  module: wsgi
  chdir: /sites/myproject/myproj
  daemonize: /sites/myproject/log/uwsgi.log
  max-requests: 4000
  buffer-size: 32768
  harakiri: 30
  harakiri-verbose: true
  reload-mercy: 8
  vacuum: true
  master: 1
  post-buffering: 8192
  processes: 4
  no-orphans: 1
  touch-reload: /sites/myproject/log/uwsgi

HTTP over TCP using Telnet/Hercules/Raw socket/

I'm connecting to real-time data on a remote server as a client. I want to send the following to a server and keep the connection open. This is a 'push' protocol.
http://server.domain.com:80/protocol/dosomething.txt?POSTDATA=thePostData
I can call this in a browser and it's fine. However, if I try to use telnet directly in a Windows command prompt, the prompt just exits.
GET protocol/dosomething.txt?POSTDATA=thePostData
The same happens if I use PuTTY.exe and select Telnet as the protocol. I can't see a way to do this with Hercules at all, as I don't think the server will interpret the GET.
Is there any way I can do this?
Thanks.
You have to match the HTTP protocol (RFC 2616) to the letter if you want to use telnet. Try something like:
shell$ telnet www.google.com 80
Trying 173.194.43.50...
Connected to www.google.com (173.194.43.50).
Escape character is '^]'.
GET / HTTP/1.1
Host: www.google.com:80
Connection: close
HTTP/1.1 200 OK
Date: Tue, 11 Sep 2012 15:09:51 GMT
...
You need to type the following lines, including the empty line after the "Connection" line.
GET / HTTP/1.1
Host: www.google.com:80
Connection: close
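If typing the request by hand proves too fragile, a small script can send the same well-formed request and then keep reading from the socket, which is what you need for a push feed. A minimal sketch using Python's standard library (host and path taken from the URL in the question; the server's behaviour is an assumption):

# Open a raw TCP connection, send a well-formed HTTP/1.1 request,
# and keep reading so the 'push' data can stream in.
import socket

HOST, PORT = "server.domain.com", 80  # from the question's URL

request = (
    "GET /protocol/dosomething.txt?POSTDATA=thePostData HTTP/1.1\r\n"
    f"Host: {HOST}\r\n"
    "Connection: keep-alive\r\n"
    "\r\n"  # the mandatory blank line ending the headers
)

with socket.create_connection((HOST, PORT)) as sock:
    sock.sendall(request.encode("ascii"))
    while True:  # keep the connection open and print whatever is pushed
        chunk = sock.recv(4096)
        if not chunk:
            break  # server closed the connection
        print(chunk.decode("latin-1"), end="")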
