Can I send carbon/graphite data to a URL on port 80? - nginx

I work in an environment where ports are closed and difficult to get open.
I was thinking of sending my graphite data to localhost/graphitein rather than to port 2003, and having nginx do the mapping.
Is there any reason not to do this?

According to Wikipedia:
Nginx [...] can also act as a reverse proxy server for HTTP, HTTPS,
SMTP, POP3, and IMAP protocols, as well as a load balancer and an
HTTP cache.
I don't think you can use Nginx as a reverse proxy for the Graphite protocol.
Perhaps you can use an SSH Tunnel?
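For reference, here is a minimal sketch (plain Python, made-up host and metric name) of what a graphite sender normally does on port 2003: carbon's plaintext protocol is just newline-terminated "path value timestamp" lines over a bare TCP socket, which is why it doesn't map naturally onto nginx's HTTP proxying.

    import socket
    import time

    # Carbon's plaintext protocol: one "metric.path value timestamp\n" line
    # per metric, sent over a bare TCP connection (default port 2003).
    CARBON_HOST = "localhost"   # assumed host
    CARBON_PORT = 2003

    def send_metric(path, value, timestamp=None):
        line = "%s %s %d\n" % (path, value, timestamp or int(time.time()))
        with socket.create_connection((CARBON_HOST, CARBON_PORT)) as sock:
            sock.sendall(line.encode("ascii"))

    send_metric("servers.web01.load", 1.23)   # metric name is made up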

This tool could be a solution: https://github.com/obfuscurity/backstop. "Backstop is a simple endpoint for submitting metrics to Graphite. It accepts JSON data via HTTP POST and proxies the data to one or more Carbon/Graphite listeners."
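If you go the Backstop route, the submission is an ordinary HTTP POST, which nginx on port 80 can proxy like any other request. A rough sketch of a sender is below; the endpoint URL and the exact JSON field names are assumptions, so check Backstop's README for the real format.

    import json
    import time
    import urllib.request

    # Hypothetical Backstop endpoint reachable over plain HTTP on port 80;
    # the URL path and JSON field names are assumptions.
    BACKSTOP_URL = "http://metrics.example.com/publish/app"

    payload = [{
        "metric": "servers.web01.load",     # made-up metric
        "value": 1.23,
        "measure_time": int(time.time()),
    }]

    req = urllib.request.Request(
        BACKSTOP_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        print(resp.status)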

Related

Rethinkdb connection with cloudflare

I am running RethinkDB on a server that sits behind Cloudflare. I cannot make a connection to my server when using my hostname. (I can access other things on my server, so I am almost sure the problem doesn't lie there.)
I am also able to connect using my IP directly (without adding a cert to the .connect function of the RethinkDB client).
I use Nginx, and my client is written in Java.
What I have tried:
Using custom ports (that were said to be open on Cloudflare)
Proxying a location to a certain port
Using a cert (RethinkDB client side)
I couldn't find any information about running RethinkDB behind Cloudflare, so I am open to any suggestions.
If I need to post more information, please ask; I'm not sure what I should share.
This is not a RethinkDB issue. Cloudflare only supports specific ports, as spelled out here:
https://support.cloudflare.com/hc/en-us/articles/200169156-Which-ports-will-Cloudflare-work-with-
80, 8080, 8880, 2052, 2082, 2086, 2095
And all traffic must be HTTP sessions. RethinkDB communicates via raw TCP sockets. You cannot run it behind Cloudflare.

Do HTTPS connections require HTTPS proxies or can I use HTTP proxies?

The question is about HTTP vs HTTPS.
If I want to anonymously load a website that forces HTTPS, like Google.com, do I need an HTTPS proxy, or can I get away with an HTTP proxy?
If your proxy is a SOCKS proxy, it will not care what kind of socket is connecting through it. It has its own handshake, and it does not care what happens after that handshake. Whether an SSL handshake (HTTPS) is started after the SOCKS handshake is not the SOCKS proxy's problem; it will just pass the traffic through.
Many HTTP proxies, on the other hand, expect HTTP headers to guide them; such an HTTP proxy will not allow HTTPS, since it needs to read the headers.
On the third hand (ahem... well, foot?), an HTTP proxy that supports HTTP CONNECT can also set up the transfer of arbitrary data. Such a proxy can therefore set up any type of socket, over which an SSL handshake can take place, which can then be used for an HTTPS transfer.
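A quick sketch of that CONNECT mechanism using only Python's standard library; the proxy host and port are made up. http.client's set_tunnel() issues the CONNECT request to the proxy, and the TLS handshake with the target then happens inside that tunnel.

    import http.client

    # Hypothetical plain HTTP proxy that supports the CONNECT verb.
    PROXY_HOST, PROXY_PORT = "proxy.example.com", 3128

    # Connect to the proxy itself, then ask it to open a tunnel to the target.
    conn = http.client.HTTPSConnection(PROXY_HOST, PROXY_PORT)
    conn.set_tunnel("www.google.com", 443)   # sends "CONNECT www.google.com:443"

    conn.request("GET", "/")
    resp = conn.getresponse()
    print(resp.status, resp.reason)
    conn.close()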
An HTTP proxy server that supports the CONNECT verb can handle HTTPS connections; you don't need a special HTTPS proxy server or any other setup.
The CONNECT verb lets you create a binary socket tunnel to any given IP:port address. Any HTTP client (all browsers) will open such a tunnel and communicate securely through the proxy server. No one can control or see anything that goes through the tunnel unless they mount a man-in-the-middle attack by sending you self-signed certificates.
Many workplace firewalls these days do implement such man-in-the-middle interception, using self-signed certificates deployed on the work network, so you probably have to dig further to determine whether the connection is really secure. It may not be that anonymous.
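One way to do that digging, sketched below: look at who issued the certificate your machine actually sees for a given site. If TLS is being intercepted, the issuer is usually an internal corporate CA rather than a public one. The hostname here is only an example, and this checks a direct connection rather than one made through a proxy.

    import socket
    import ssl

    # Print who issued the certificate presented for this host.
    HOST = "www.google.com"

    ctx = ssl.create_default_context()
    with socket.create_connection((HOST, 443)) as raw:
        with ctx.wrap_socket(raw, server_hostname=HOST) as tls:
            issuer = dict(attr[0] for attr in tls.getpeercert()["issuer"])
            print(issuer.get("organizationName"), "/", issuer.get("commonName"))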
If you're trying to access a service anonymously, you won't get that by running your own proxy. It's not clear from the original question what is meant by "proxy" (e.g. a local service or a remote service). You won't get anonymity by surfing through a proxy that's on your own network, unless it's something like a Tor proxy that relays out through the Tor network.
As for whether proxies can support HTTPS, that's been covered here; it would be unusual to find a proxy that doesn't support CONNECT. However, if it's a remote anonymizing service you're using, I doubt they would do MitM, since you'd need to install their signing cert into your trusted root store, so they couldn't do it surreptitiously.

Nginx catch "broken header" when listening to proxy_protocol

I need to use HTTP health checks on an Elastic Beanstalk application with proxy protocol turned on. That is currently not possible, and the health check fails with an error: *58 broken header while reading PROXY protocol
I figured I have two options:
Perform the health check on another port, and set up nginx to listen for plain HTTP requests on that port and proxy them to my app.
Catch the broken header errors, or detect regular HTTP requests in the proxy_protocol server block, and redirect those requests to a port that listens for plain HTTP.
I would prefer the latter (#2), if possible. Is there any way to do this?
Ideally, I would prefer not to have to do any of this. A feature request to fix this has been submitted to AWS, but it has no ETA.
The proxy protocol specification says:
The receiver MUST be configured to only receive the protocol described in this
specification and MUST not try to guess whether the protocol header is present
or not. This means that the protocol explicitly prevents port sharing between
public and private access. Otherwise it would open a major security breach by
allowing untrusted parties to spoof their connection addresses.
I think this means that option 2 is a sufficiently bad idea that it's not even supported by conforming implementations of the proxy protocol.
Option 1, on the other hand, seems pretty reasonable. You can set up a security group so that only legitimate health checks can come in on the port without proxy protocol enabled.
Another couple of options spring to mind too:
Simply point your health checks at the thing that's adding the header (i.e. the ELB?), rather than directly at your Nginx instance. I'm not sure if this is possible with Elastic Beanstalk; it's not a service I use.
Use something else to add the proxy protocol header before forwarding the health-check traffic on to your Nginx, which would avoid having to duplicate your Nginx config. For instance, an HAProxy running on the same machine as your Nginx could do this. Again, use security groups to ensure that only legitimate traffic gets through.
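To illustrate that second idea, here is a very rough Python sketch of a relay that accepts plain health-check connections on one port and forwards them to the proxy_protocol listener with a PROXY protocol v1 line prepended. In practice you would use HAProxy or similar; the ports and addresses below are assumptions.

    import socket
    import threading

    LISTEN_PORT = 8090               # health checks hit this port (plain HTTP)
    NGINX_ADDR = ("127.0.0.1", 80)   # server block with "listen ... proxy_protocol"

    def handle(client):
        src_ip, src_port = client.getpeername()
        upstream = socket.create_connection(NGINX_ADDR)
        # PROXY protocol v1: "PROXY TCP4 <src> <dst> <srcport> <dstport>\r\n"
        header = "PROXY TCP4 %s %s %d %d\r\n" % (
            src_ip, NGINX_ADDR[0], src_port, NGINX_ADDR[1])
        upstream.sendall(header.encode("ascii"))
        # Naive one-request relay: pass the request through, return the response.
        upstream.sendall(client.recv(65536))
        client.sendall(upstream.recv(65536))
        upstream.close()
        client.close()

    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    server.bind(("0.0.0.0", LISTEN_PORT))
    server.listen(16)
    while True:
        conn, _ = server.accept()
        threading.Thread(target=handle, args=(conn,), daemon=True).start()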

How does Proxifier resolve hostnames through proxy?

What technique does Proxifier use to resolve hostnames through a proxy? All the other solutions I've found on the Internet offer DNS through SOCKS, just like Badvpn/Tun2socks does. But Proxifier can work even through an HTTP proxy, and the only thing you need is a proxy server that supports DNS (Squid, for example). Their explanation is very brief, saying "Proxifier has to assign placeholder (fake) IP addresses". But what does that mean exactly?
Note: As you know, DNS queries use UDP by default and can't naturally be forwarded through an HTTP proxy. Browsers are another example of software that does name resolution through the proxy when set to use one.
Assigning a fake IP in response to a DNS request is different from returning a truncated DNS answer.
Here's the related RFC about assigning fake IPs in response to DNS requests in order to capture the domain name and pass it to the remote proxy server afterward: https://www.rfc-editor.org/rfc/rfc3089
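To make that distinction concrete, here is a hedged Python sketch of the placeholder-IP idea (this is not Proxifier's code; the listen address and IP range are assumptions): answer each DNS query with a fake address from a reserved range and remember which hostname it stands for, so a local proxy can later turn a connection to that fake IP back into a request for the real hostname via the remote proxy.

    import itertools
    import socket
    import struct

    # Hand out fake A records and remember hostname <-> fake IP mappings.
    # Handles A-style queries only; a real implementation would check QTYPE.
    fake_ip_pool = ("240.0.0.%d" % i for i in itertools.count(1))
    hostname_for_ip = {}

    def parse_qname(query):
        # Read the QNAME labels that start at offset 12 of the DNS query.
        labels, i = [], 12
        while query[i]:
            n = query[i]
            labels.append(query[i + 1:i + 1 + n].decode("ascii"))
            i += 1 + n
        return ".".join(labels), i + 5   # skip 0x00 terminator + QTYPE + QCLASS

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("127.0.0.1", 5353))
    while True:
        query, client = sock.recvfrom(512)
        name, qend = parse_qname(query)
        ip = next(fake_ip_pool)
        hostname_for_ip[ip] = name
        # Response header: QR/RD/RA flags, 1 question, 1 answer.
        header = query[:2] + struct.pack("!HHHHH", 0x8180, 1, 1, 0, 0)
        answer = (b"\xc0\x0c" + struct.pack("!HHIH", 1, 1, 60, 4)
                  + socket.inet_aton(ip))
        sock.sendto(header + query[12:qend] + answer, client)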
I found the answer. Redsocks has indeed implemented the truncated-answer approach, as its documentation says:
Redsocks includes `dnstc' that is fake and really dumb DNS server that
returns "truncated answer" to every query via UDP. RFC-compliant
resolver should repeat same query via TCP in this case - so the
request can be redirected using usual redsocks facilities.
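And here is a minimal stand-in for that `dnstc' trick (not redsocks' actual code; the listen address is made up): a dumb UDP responder that flips the QR and TC bits on every query, so an RFC-compliant resolver retries the same query over TCP, where it can be redirected like any other TCP connection.

    import socket
    import struct

    LISTEN = ("127.0.0.1", 5353)   # assumed listen address

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(LISTEN)

    while True:
        query, client = sock.recvfrom(512)
        if len(query) < 12:
            continue  # not a DNS packet
        # Copy the query and flip the flags: QR=1 (response), TC=1 (truncated).
        flags, = struct.unpack("!H", query[2:4])
        flags |= 0x8200
        reply = query[:2] + struct.pack("!H", flags) + query[4:]
        sock.sendto(reply, client)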

Split HTTP and TCP-only (non-HTTP) traffic

I have a web application that runs on Tomcat (and receives HTTP requests) and another standalone backend application that receives raw TCP only. For various reasons, only port 8080 is open to the outside. So I need to accept all incoming TCP connections on port 8080, forward the HTTP ones to the web application on Tomcat, and forward the pure TCP requests (those that are not HTTP) to the standalone application. Internal forwarding can go to any port, e.g. 8181 for Tomcat and 8282 for the standalone application. Is it possible to set up such a configuration? How could it be done?
Thanks in advance.
TCP and HTTP are protocols at different layers of the networking stack. If you want some application to filter HTTP requests, it has to deal with application-layer information, not transport-layer information (like TCP/UDP).
I don't see how this can be done in general. You could look packet by packet, but the middle of an HTTP body can contain arbitrary data, so you can't just inspect the payload of each packet.
If any particular client will send you either HTTP or general TCP but not both, can you do this by source IP address? Do you know the addresses of either the servers that will send you HTTP requests or the ones that will send you TCP requests?
If you don't know the source IPs, you could heuristically look at the first packet from a previously unknown IP, see if it looks like HTTP, and then tag that address as carrying HTTP traffic.
What is the content/format of the TCP communication? Is there any pattern you can detect in it?
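As a rough illustration of that look-at-the-first-bytes heuristic (applied per connection rather than per source IP), the Python sketch below accepts everything on 8080, peeks at the start of each connection, and forwards it to Tomcat on 8181 if it looks like an HTTP request line, otherwise to the standalone application on 8282. The ports come from the question; the method list is not exhaustive, and this is not production-grade.

    import socket
    import threading

    HTTP_METHODS = (b"GET ", b"POST", b"PUT ", b"HEAD", b"DELE", b"OPTI", b"PATC")
    TOMCAT = ("127.0.0.1", 8181)
    BACKEND = ("127.0.0.1", 8282)

    def pipe(src, dst):
        # Copy bytes one way until the source closes.
        while True:
            data = src.recv(65536)
            if not data:
                break
            dst.sendall(data)
        dst.close()

    def handle(client):
        first = client.recv(4096, socket.MSG_PEEK)   # look without consuming
        target = TOMCAT if first[:4] in HTTP_METHODS else BACKEND
        upstream = socket.create_connection(target)
        threading.Thread(target=pipe, args=(client, upstream), daemon=True).start()
        threading.Thread(target=pipe, args=(upstream, client), daemon=True).start()

    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    server.bind(("0.0.0.0", 8080))
    server.listen(64)
    while True:
        conn, _ = server.accept()
        threading.Thread(target=handle, args=(conn,), daemon=True).start()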
Perhaps you could do something like this using iptables + l7-filter. Of course, this will only work if you run Linux on your box. Also, I don't know how recently the l7-filter project has been updated.
Java servlet technology is not limited to HTTP. The Servlet interface lets you read the incoming input stream via ServletRequest.getInputStream(). So you can create an implementation of the Servlet interface, map it in web.xml, and you are all set to receive any TCP traffic.
Once you have read the input stream to sniff the content, you will want to forward HTTP requests to an HttpServlet. To do this, you will need to make sure that the input stream you pass on is positioned at the very beginning of the input.
EDIT: On reading your question once again, I noticed that you don't plan to expose Tomcat directly on the external port as I originally thought. If you are willing to make Tomcat listen on the external port, you can try the approach described above.
