I'm running an nsq cluster in Docker containers using the following docker-compose.yaml file:
version: '2'
services:
  nsqlookupd:
    image: nsqio/nsq
    command: /nsqlookupd
    ports:
      - "4160"
      - "4161:4161"
  nsqd:
    image: nsqio/nsq
    command: /nsqd --lookupd-tcp-address=nsqlookupd:4160 --data-path=/data
    volumes:
      - data:/data
    ports:
      - "4150:4150"
      - "4151:4151"
  nsqadmin:
    image: nsqio/nsq
    command: /nsqadmin --lookupd-http-address=nsqlookupd:4161
    ports:
      - "4171:4171"
volumes:
  data:
Everything runs fine. But if I call the /nodes endpoint on the nsqlookupd server, I get this:
$ http http://localhost:4161/nodes
HTTP/1.1 200 OK
Content-Length: 238
Content-Type: application/json; charset=utf-8
Date: Tue, 24 Jan 2017 08:44:27 GMT
{
    "data": {
        "producers": [
            {
                "broadcast_address": "7dd3d550e7f8",
                "hostname": "7dd3d550e7f8",
                "http_port": 4151,
                "remote_address": "172.18.0.4:57156",
                "tcp_port": 4150,
                "tombstones": [],
                "topics": [],
                "version": "0.3.8"
            }
        ]
    },
    "status_code": 200,
    "status_txt": "OK"
}
The broadcast address looks like the container's name/hostname. I tried to ping it on port 4151 just in case, but it fails.
> http http://7dd3d550e7f8:4151/ping
http: error: ConnectionError: HTTPConnectionPool(host='7dd3d550e7f8', port=4151): Max retries exceeded with url: /ping (Caused by NewConnectionError('<requests.packages.urllib3.connection.HTTPConnection object at 0x000001C397173EF0>: Failed to establish a new connection: [Errno 11001] getaddrinfo failed',)) while doing GET request to URL: http://7dd3d550e7f8:4151/ping
Same for the remote address:
> http http://172.18.0.4:4151/ping
http: error: ConnectionError: HTTPConnectionPool(host='172.18.0.4', port=4151): Max retries exceeded with url: /ping (Caused by NewConnectionError('<requests.packages.urllib3.connection.HTTPConnection object at 0x000001C0D9545F28>: Failed to establish a new connection: [WinError 10060] A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond',)) while doing GET request to URL: http://172.18.0.4:4151/ping
Everything works if I use localhost or 127.0.0.1:
> http http://localhost:4151/ping
HTTP/1.1 200 OK
Content-Length: 2
Content-Type: text/plain; charset=utf-8
Date: Tue, 24 Jan 2017 08:51:30 GMT
OK
But that's cheating. The whole point of the nsqlookupd servers is that they keep track of the nsqd servers so clients can dynamically get a list of responsive servers.
Is it possible to get an accessible URL/IP address for the nsqd nodes from the nsqlookupd server when the nsqd nodes are running in Docker containers?
Is there some magic incantation to make it work?
Has anyone tried this with Swarm or Kubernetes?
I found that GKE now supports StatefulSets as of 1.5.2.
That means your nsqd and nsqlookupd instances can be run as StatefulSet pods. You can then pass -broadcast-address=$POD_IP from the downward API, and your producers will be able to publish to nsq-0.nsq-service-name, nsq-1.nsq-service-name, etc., while consumers get the advertised nsqd IP address from nsqlookupd. That works for us; we just managed to get it working today.
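For reference, a rough sketch of the relevant part of such a StatefulSet pod spec (not the exact config used above; the lookupd address nsqlookupd:4160 is a placeholder for your own service):

containers:
  - name: nsqd
    image: nsqio/nsq
    env:
      # expose the pod's IP via the downward API
      - name: POD_IP
        valueFrom:
          fieldRef:
            fieldPath: status.podIP
    command:
      - /nsqd
      - --lookupd-tcp-address=nsqlookupd:4160
      # advertise the pod IP so consumers get a reachable address from nsqlookupd
      - --broadcast-address=$(POD_IP)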
I've been following a blog on how to compile ModSecurity with nginx. I tried to verify that everything works by creating the file /etc/nginx/conf.d/echo.conf, which contains:
server {
    listen localhost:8085;
    location / {
        default_type text/plain;
        return 200 "Thank you for requesting ${request_uri}\n";
    }
}
I ran the following in cmd:
sudo nginx -s reload
curl -D - http://localhost:8085 HTTP/1.1 200 OK
and I got
HTTP/1.1 200 OK
Server: nginx/1.19.0
Date: Wed, 10 Jun 2020 19:31:08 GMT
Content-Type: text/plain
Content-Length: 27
Connection: keep-alive
Thank you for requesting /
curl: (6) Could not resolve host: HTTP
I have been on this for hours and can't figure out what to do. The two possible causes I've found were:
IPv6 being enabled
A wrong DNS server
I ran the command with --ipv4 (curl --ipv4 -D - http://localhost:8085 HTTP/1.1 200 OK) with no success.
I also changed the nameserver in /etc/resolv.conf to 8.8.8.8 instead of 127.0.0.53, which also didn't work.
Any clues on what to do?
That error message appears because of the command syntax you used. When using curl it should be enough to run:
curl -D - http://localhost:8085
to make an HTTP request to the web server you specify (localhost in this case). Otherwise curl treats any additional arguments as extra URLs to query (when they cannot be parsed as options), so it tries to query HTTP as if you had typed http://HTTP, which simply will not work, at least not until you define an entry for a host named HTTP in your /etc/hosts, for example.
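In other words, the status line that appears in the blog's expected output must not be pasted onto the command line. A quick illustration:

# everything after the URL is treated as more URLs to fetch, hence "Could not resolve host: HTTP"
curl -D - http://localhost:8085 HTTP/1.1 200 OK

# pass only the URL; the "HTTP/1.1 200 OK" line is part of the response, not the command
curl -D - http://localhost:8085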
I am facing a rather tricky issue where it appears that Varnish is closing the backend connection without waiting for a response from the backend.
We are using Nginx to serve static content. Below is the sequence of messages:
Varnish sends POST request to App
App sends back 500 Internal Server Error
Varnish intercepts the 500 Internal Server Error (to display a static error page)
Varnish sends GET request to Nginx server (on the same server) to serve static content
Varnish shows the following error message (even though Nginx sends the response successfully within milliseconds):
- VCL_call BACKEND_FETCH
- VCL_return fetch
- BackendOpen 38 boot.staticpages 127.0.0.1 82 127.0.0.1 35064
- BackendStart 127.0.0.1 82
- FetchError backend write error: 0 (Success)
- Timestamp Bereq: 1543420795.016075 5.106813 0.000099
- BackendClose 38 boot.staticpages
- Timestamp Beresp: 1543420795.016497 5.107235 0.000422
- Timestamp Error: 1543420795.016503 5.107241 0.000005
- BerespProtocol HTTP/1.1
- BerespStatus 503
- BerespReason Service Unavailable
- BerespReason Backend fetch failed
- BerespHeader Date: Wed, 28 Nov 2018 15:59:55 GMT
- BerespHeader Server: Varnish
- VCL_call BACKEND_ERROR
Varnish then goes to the same Nginx server again to display the default content.
Nginx sends the response, and Varnish accepts it and sends it back to the customer.
It appears that the backend connection gets closed pretty quickly.
Any help in this regard is highly appreciated.
Thanks,
We resolved the issue; below is a summary of what the issue was and how we fixed it.
Issue Summary:
Varnish displays a backend fetch error when the original POST request results in a 500 Internal Server Error and the backend response is used to GET a customized static 500 Internal Server Error page.
VarnishLog output (only the relevant messages):
It can be seen that the backend connection is closed as soon as the request is sent.
- VCL_call BACKEND_FETCH
- VCL_return fetch
- BackendOpen 24 boot.staticpages 127.0.0.1 82 127.0.0.1 40696
- BackendStart 127.0.0.1 82
- FetchError backend write error: 0 (Success)
- Timestamp Bereq: 1543416195.877756 5.116981 0.000046
- BackendClose 24 boot.staticpages
- Timestamp Beresp: 1543416195.877888 5.117113 0.000132
- Timestamp Error: 1543416195.877892 5.117117 0.000004
- BerespProtocol HTTP/1.1
- BerespStatus 503
- BerespReason Service Unavailable
- BerespReason Backend fetch failed
- BerespHeader Date: Wed, 28 Nov 2018 14:43:15 GMT
- BerespHeader Server: Varnish
- VCL_call BACKEND_ERROR
Root Cause:
Varnish can't retry because there's no body to send anymore.
Resolution:
Cache the body of the original request by using std.cache_req_body(10KB); see https://varnish-cache.org/docs/trunk/reference/vmod_generated.html#func-cache-req-body
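A minimal VCL sketch of that fix (assuming a Varnish version where std.cache_req_body is available, i.e. 4.1+; the 10KB limit is simply the value we chose):

vcl 4.0;
import std;

sub vcl_recv {
    # buffer up to 10KB of the request body so Varnish can re-send it
    # when the backend fetch for the error page is made
    std.cache_req_body(10KB);
}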
I'm running rabbitmq-server v3.3.5-1.1 on Debian 8.2. I have enabled rabbitmq_web_stomp and rabbitmq_web_stomp_examples as suggested in the docs:
rabbitmq-plugins enable rabbitmq_web_stomp
rabbitmq-plugins enable rabbitmq_web_stomp_examples
All examples exposed at http://127.0.0.1:15670 work as intended, but they all use SockJS rather than the browser's native WebSocket:
// Stomp.js boilerplate
var ws = new SockJS('http://' + window.location.hostname + ':15674/stomp');
var client = Stomp.over(ws);
I would like to stick to WebSocket, so I tried what was suggested in the docs:
var ws = new WebSocket('ws://127.0.0.1:15674/ws');
This throws an error in my face:
WebSocket connection to 'ws://127.0.0.1:15674/ws' failed: Error during WebSocket handshake: Unexpected response code: 404
Further tests with netcat confirm 404:
# netcat -nv 127.0.0.1 15674
127.0.0.1 15674 open
GET /ws HTTP/1.1
Host: 127.0.0.1
HTTP/1.1 404 Not Found
Connection: close
Content-Length: 0
Date: Sat, 23 Jan 2016 20:15:13 GMT
Server: Cowboy
Obviously Cowboy does not expose a /ws path, so I wonder:
Is it possible to reconfigure Cowboy in this situation? How? Is it worth it?
Can I use nginx in place of Cowboy (my preferred option)? How?
What other options do I have?
EDIT
The RabbitMQ docs are misleading. The correct WebSocket URI is:
http://127.0.0.1:15674/stomp/websocket
good job, but:
new WebSocket('http://127.0.0.1:15674/stomp/websocket')
VM98:2 Uncaught DOMException: Failed to construct 'WebSocket': The URL's scheme must be either 'ws' or 'wss'. 'http' is not allowed.(…)(anonymous function) ...
You need to use the ws/wss scheme:
new WebSocket('ws://127.0.0.1:15674/stomp/websocket')
WebSocket {url: "ws://127.0.0.1:15674/stomp/websocket", readyState: 0, bufferedAmount: 0, onopen: null, onerror: null…}
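Putting both corrections together, a minimal sketch with Stomp.js over the native WebSocket (guest/guest are just the RabbitMQ defaults; adjust to your setup):

var ws = new WebSocket('ws://127.0.0.1:15674/stomp/websocket');
var client = Stomp.over(ws);
client.connect('guest', 'guest', function () {
    console.log('STOMP connected over a native WebSocket');
});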
So, I have to retrieve the temperature of any one of the cities from http://www.rssweather.com/dir/Asia/India.
Let's assume I want to retrieve Kanpur's.
How to make an HTTP GET request with Netcat?
I'm doing something like this.
nc -v rssweather.com 80
GET http://www.rssweather.com/wx/in/kanpur/wx.php HTTP/1.1
I don't know if I'm even headed in the right direction. I am not able to find any good tutorials on how to make an HTTP GET request with netcat, so I'm posting it here.
Of course you could dig into the standards or search Google, but if you only want to fetch a single URL, it isn't worth the effort.
You could also start a netcat in listening mode on a port:
nc -l 64738
(Sometimes nc -l -p 64738 is the correct argument list)
...and then make a request to this port with a real browser. Just type http://localhost:64738 into your browser and see.
In your actual case, the problem is that HTTP/1.1 doesn't close the connection automatically; it waits for the next URL you want to retrieve. The solution is simple:
Use HTTP/1.0:
GET /this/url/you/want/to/get HTTP/1.0
Host: www.rssweather.com
<empty line>
or use a Connection: request header to tell the server you want it to close the connection after this request:
GET /this/url/you/want/to/get HTTP/1.1
Host: www.rssweather.com
Connection: close
<empty line>
Extension: after the GET verb, write only the path part of the URL. The hostname from which you want to get data belongs in a Host: header, as you can see in my examples. This is because multiple websites can run on the same web server, so the browser needs to tell it which site it wants to load the page from.
This works for me:
$ nc www.rssweather.com 80
GET /wx/in/kanpur/wx.php HTTP/1.0
Host: www.rssweather.com
And then hit double <enter>, i.e. once for the remote http server and once for the nc command.
source: pentesterlabs
You don't even need to use/install netcat:
Create a TCP socket via an unused file descriptor, e.g. I use 88 here
Write the request into it
Read the response back from the fd
exec 88<>/dev/tcp/rssweather.com/80
echo -e "GET /dir/Asia/India HTTP/1.1\nhost: www.rssweather.com\nConnection: close\n\n" >&88
sed 's/<[^>]*>/ /g' <&88
On MacOS, you need the -c flag as follows:
Little-Net:~ minfrin$ nc -c rssweather.com 80
GET /wx/in/kanpur/wx.php HTTP/1.1
Host: rssweather.com
Connection: close
[empty line]
The response then appears as follows:
HTTP/1.1 200 OK
Date: Thu, 23 Aug 2018 13:20:49 GMT
Server: Apache
Connection: close
Transfer-Encoding: chunked
Content-Type: text/html
The -c flag is described as "Send CRLF as line-ending".
To be HTTP/1.1 compliant, you need the Host header, as well as the "Connection: close" if you want to disable keepalive.
Test it out locally with python3 http.server
This is also a fun way to test it out. On one shell, launch a local file server:
python3 -m http.server 8000
Then on the second shell, make a request:
printf 'GET / HTTP/1.1\r\nHost: localhost\r\n\r\n' | nc localhost 8000
The Host: header is required in HTTP 1.1.
This shows an HTML listing of the directory, just as you would see from:
firefox http://localhost:8000
Next you can try to list files and directories and observe the response:
printf 'GET /my-subdir/ HTTP/1.1\n\n' | nc localhost 8000
printf 'GET /my-file HTTP/1.1\n\n' | nc localhost 8000
Every time you make a successful request, the server prints:
127.0.0.1 - - [05/Oct/2018 11:20:55] "GET / HTTP/1.1" 200 -
confirming that it was received.
example.com
This IANA-maintained domain is another good test URL:
printf 'GET / HTTP/1.1\r\nHost: example.com\r\n\r\n' | nc example.com 80
and compare with: http://example.com/
https SSL
nc does not seem to be able to handle https URLs. Instead, you can use:
sudo apt-get install nmap
printf 'GET / HTTP/1.1\r\nHost: github.com\r\n\r\n' | ncat --ssl github.com 443
See also: https://serverfault.com/questions/102032/connecting-to-https-with-netcat-nc/650189#650189
If you try nc, it just hangs:
printf 'GET / HTTP/1.1\r\nHost: github.com\r\n\r\n' | nc github.com 443
and trying port 80:
printf 'GET / HTTP/1.1\r\nHost: github.com\r\n\r\n' | nc github.com 80
just gives a redirect response to the https version:
HTTP/1.1 301 Moved Permanently
Content-Length: 0
Location: https://github.com/
Connection: keep-alive
Tested on Ubuntu 18.04.
I am developing an HTTP proxy in Java. I resend all the data from the client to the server without touching it, but for some URLs (for example this one) the server returns a 404 error if I connect through my proxy.
The requested URL uses Varnish caching, so that might be the root of the problem. I cannot reconfigure it; it is not mine.
If I request that URL directly with the browser, the server returns 200 and the image is shown correctly.
I am stuck because I do not even know what to read or how to phrase a search query.
Thanks a lot.
Fix the Host: header of the re-issued request. The request going out from the proxy either has no Host header or it is broken (or only X-Host exists). Also take note that the proxy application will execute its own DNS lookup and that might yield a different IP address than your local computer (where you issued the original request).
This works:
> curl -s -D - -o /dev/null http://212.25.95.152/w/w-200/1902047-41.jpg -H "Host: msc.wcdn.co.il"
HTTP/1.1 200 OK
Content-Type: image/jpeg
Cache-Control: max-age = 315360000
magicmarker: 1
Content-Length: 27922
Accept-Ranges: bytes
Date: Sun, 05 Jul 2015 00:52:08 GMT
X-Varnish: 2508753650 2474246958
Age: 67952
Via: 1.1 varnish
Connection: keep-alive
X-Cache: HIT
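If it helps, here is a minimal, hypothetical Java sketch of what the proxy should be sending upstream, mirroring the curl example above (IP address and Host value taken from it):

import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.io.OutputStreamWriter;
import java.io.Writer;
import java.net.Socket;

public class HostHeaderCheck {
    public static void main(String[] args) throws IOException {
        try (Socket s = new Socket("212.25.95.152", 80)) {
            Writer out = new OutputStreamWriter(s.getOutputStream(), "US-ASCII");
            out.write("GET /w/w-200/1902047-41.jpg HTTP/1.1\r\n");
            out.write("Host: msc.wcdn.co.il\r\n");   // keep the original Host header, not the resolved IP
            out.write("Connection: close\r\n\r\n");
            out.flush();

            // read only the status line; the body is binary (a JPEG)
            BufferedReader in = new BufferedReader(new InputStreamReader(s.getInputStream(), "US-ASCII"));
            System.out.println(in.readLine());       // expect: HTTP/1.1 200 OK
        }
    }
}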