I have a running HTTP web application and I am having trouble making it run over HTTPS.
I am thinking of putting an HTTPS proxy in front of it that accepts user requests and forwards them to the HTTP web app.
What do you think of that approach, and how can I accomplish it?
Setting up stunnel is a no-brainer - and it's available for Unix/Linux/POSIX/MS Windows (you might have mentioned what OS you are using).
(You can also run it to encrypt or decrypt, on either the server or the client side.)
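For example, a minimal stunnel configuration that terminates HTTPS on port 443 and hands the decrypted traffic to a local HTTP app on port 8080 might look roughly like this (the certificate path and ports are assumptions for the sketch):

; /etc/stunnel/stunnel.conf - terminate TLS in front of an existing HTTP app
; certificate + key used for the HTTPS side
cert = /etc/stunnel/server.pem

[https-frontend]
; listen for HTTPS connections from clients
accept = 443
; forward the decrypted traffic to the plain-HTTP application
connect = 127.0.0.1:8080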
It's possible to run Apache Httpd (for example) using HTTPS and use mod_proxy_http as a reverse proxy to forward the requests to your existing HTTP server. Of course, for this to be of any use, you'd need the reverse proxy and the target server to be connected in such a way that connections cannot be sniffed or altered.
You may find that the existing server needs certain extra settings for it to be aware it's using HTTPS (for example, special Valves in Apache Tomcat to set the HTTPS flag to true).
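As a rough sketch of that setup (hostnames, ports and certificate paths are placeholders, and it assumes mod_ssl, mod_headers, mod_proxy and mod_proxy_http are loaded):

<VirtualHost *:443>
    ServerName www.example.com

    SSLEngine on
    SSLCertificateFile    /etc/ssl/certs/example.crt
    SSLCertificateKeyFile /etc/ssl/private/example.key

    # forward all requests to the existing plain-HTTP server
    ProxyPreserveHost On
    ProxyPass        / http://internal-app.local:8080/
    ProxyPassReverse / http://internal-app.local:8080/

    # let the backend know the original request came in over HTTPS
    RequestHeader set X-Forwarded-Proto "https"
</VirtualHost>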
Apache httpd reverse-proxy?
I have an application set up like this:
There is a server with a reverse proxy/load balancer that acts as the HTTPS termination point (this is the one that has a server certificate), and several applications behind it (*).
However, some applications require authentication of the client with a certificate, and this authentication cannot happen in the reverse proxy. Will the application be able to see the user certificate, or will it be lost in the HTTPS->HTTP transition?
(*) OK, so this is a Kubernetes ingress, and containers/pods.
It will be lost. I think you need to extract it in the reverse proxy (i.e. nginx) and pass it to the backend as an HTTP header if you really must. See for example https://serverfault.com/questions/788895/nginx-reverse-proxy-pass-through-client-certificate. Not very secure, as the cert is passed in the clear!
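A rough sketch of the nginx side, assuming a reasonably recent nginx (the $ssl_client_escaped_cert variable needs 1.13.5+) and a backend that knows how to read the header; hostnames, paths and the header name are placeholders:

server {
    listen 443 ssl;
    server_name example.com;

    ssl_certificate     /etc/nginx/tls/server.crt;
    ssl_certificate_key /etc/nginx/tls/server.key;

    # request a client certificate but don't reject clients that lack one
    ssl_client_certificate /etc/nginx/tls/client-ca.crt;
    ssl_verify_client optional;

    location / {
        # hand the (URL-encoded PEM) client certificate to the backend as a header
        proxy_set_header X-SSL-Client-Cert $ssl_client_escaped_cert;
        proxy_pass http://backend-app:8080;
    }
}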
I don't know if we have that level of control over the ingress; personally, I'm using a normal nginx server for incoming traffic instead.
I've recently set up a Crucible instance in AWS behind an HTTPS ELB. I also have an nginx reverse proxy on the instance to redirect HTTP requests to HTTPS.
This partially works. However, Crucible itself doesn't know it's running over HTTPS, so it serves up mixed content, and AJAX requests often break due to HTTP -> HTTPS conflicts.
I've found documentation for installing a certificate in Crucible directly...
https://confluence.atlassian.com/fisheye/fisheye-ssl-configuration-298976938.html
However I'd really rather not have to do it this way. I want to have the HTTPS terminated at the ELB, to make it easier to manage centrally through AWS.
I've also found documentation for using Crucible through a reverse proxy...
https://confluence.atlassian.com/kb/proxying-atlassian-server-applications-with-apache-http-server-mod_proxy_http-806032611.html
However this doesn't specifically deal with HTTPS.
All I really need is a way to ensure that Crucible doesn't serve up content with hard coded internal HTTP references. It needs to either leave off the protocol, or set HTTPS for the links.
Setting up the reverse proxy configuration should help accomplish this. Under Administration >> Global Settings >> Server >> Web Server set the following:
Proxy scheme: https
Proxy host: elb.hostname.com
Proxy port: 443
And restart Crucible.
Making the configuration in the UI is one way. You can also change config.xml in $FISHEYE_HOME:
<web-server site-url="https://your-public-crucible-url">
<http bind=":8060" proxy-host="your-public-crucible-url" proxy-port="443" proxy-scheme="https"/>
</web-server>
Make sure to shutdown FishEye/Crucible before making this change.
AFAIK, this configuration is the only way to make the internal Jetty of FishEye/Crucible aware of the reverse proxy in front of it.
As part of my job,
I need to intercept the communication between a native Windows application and a web server.
My connection to the environment is through an SSL-VPN.
The application (.exe) is installed on my PC and communicates with the web server over HTTPS on port 1912.
Usually I use Burp proxy to intercept the communication between a browser and a server (configuring the proxy through the browser settings). Yet,
in this case (a native Windows application) I cannot figure out how to route the traffic to a proxy.
Is there any specific proxy or configuration I can use to do that and still use Burp (because it is a web proxy, and I need to mess with the HTTP requests)?
The first thing you have to understand is whether this native application is programmed to use a proxy at all. If it can use one, it might obtain the proxy information from the Windows system settings, or you might need to configure the proxy inside the application itself.
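If the application does honor the Windows system (WinHTTP) proxy settings (an assumption you would have to verify for this particular app), you could try pointing them at Burp's default listener:

rem point the WinHTTP system proxy at Burp (default listener is 127.0.0.1:8080)
netsh winhttp set proxy 127.0.0.1:8080

rem inspect the current setting, and reset it when you're done
netsh winhttp show proxy
netsh winhttp reset proxy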
Another possibility is to use the default gateway and redirect requests to your proxy with an HTTP 3XX response. Whether that works depends on your native application; the default gateway might just act as a proxy itself.
I assume you are not talking about reverse proxying or forward proxy caching here (https://docs.trafficserver.apache.org/en/4.2.x/admin/reverse-proxy-http-redirects.en.html).
I am not sure how to formulate my question but here we go:
I have 2 servers, one is the nginx reverse proxy and one is the app server.
On my app server, I am developing a simple HTTP client using the Jersey client that will send a request to another server. I can do this now, but the traffic goes from the app server directly to the destination. Is it possible to send it from the app server so that it passes through the reverse proxy server and then goes to the destination?
And, is this design ok or is it an abomination?
An nginx reverse proxy only handles incoming requests from outside your network; it will not proxy outbound requests originating on your app server.
To make your system work as you described, you have to configure firewall NAT or a caching/forward HTTP proxy such as Squid.
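If you go the Squid route, a minimal forward-proxy configuration could look roughly like this (the listening port and the source network are assumptions for the sketch):

# /etc/squid/squid.conf - minimal forward proxy for outbound traffic from the app server
http_port 3128

acl app_servers src 10.0.0.0/24
http_access allow app_servers
http_access deny all

The Jersey client on the app server would then have to be told to use that proxy, for example via the standard http.proxyHost/http.proxyPort JVM system properties (assuming the default HttpURLConnection-based connector).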
Unless you have a reason for your servers to appear as a single computer, your configuration is OK.
From what I understand, Node.js doesn't need nginx to work as an HTTP server (or a websockets server, or any server for that matter), but I keep reading about how to use nginx instead of the Node.js internal server and can't find a good reason to go that way.
Here http://developer.yahoo.com/yui/theater/video.php?v=dahl-node the Node.js author says that Node.js is still in development and so there may be security issues that nginx simply hides.
On the other hand, under heavy traffic nginx will be able to split the job between many running Node.js servers.
In addition to the previous answers, there’s another practical reason to use nginx in front of Node.js, and that’s simply because you might want to run more than one Node app on your server.
If a Node app is listening on port 80, you are limited to that one app. If nginx is listening on port 80 it can proxy the requests to multiple Node apps running on other ports.
It’s also convenient to delegate TLS/SSL/HTTPS to Nginx. Doing TLS directly in Node is possible, but it’s extra work and error-prone. With Nginx (or another proxy) in front of your app, you don’t have to worry about it and there are tools to help you securely configure it.
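A rough nginx sketch combining both points, with TLS terminated in nginx and two hypothetical Node apps on ports 3000 and 3001 (hostnames and certificate paths are placeholders):

# HTTPS is terminated here; each hostname is proxied to its own Node app
server {
    listen 443 ssl;
    server_name app-one.example.com;
    ssl_certificate     /etc/nginx/tls/app-one.crt;
    ssl_certificate_key /etc/nginx/tls/app-one.key;

    location / {
        proxy_pass http://127.0.0.1:3000;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Proto https;
    }
}

server {
    listen 443 ssl;
    server_name app-two.example.com;
    ssl_certificate     /etc/nginx/tls/app-two.crt;
    ssl_certificate_key /etc/nginx/tls/app-two.key;

    location / {
        proxy_pass http://127.0.0.1:3001;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Proto https;
    }
}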
But be prepared: nginx doesn't support HTTP/1.1 when talking to the backend, so features like keep-alive or websockets won't work if you put Node behind nginx.
UPD: see "nginx 1.2.0 - socket.io - HTTP/1.1 - Proxy websocket connections" for more up-to-date info.