I'm currently working on setting up a reverse proxy for testing a flex-based web application.
The current setup is using mod_proxy (with mod_proxy_http) to reverse proxy to another host. Everything seems to work except for requests made from the flash player, which result in an error message that says "Security error accessing url".
I have a crossdomain.xml set up on the back end system that simply allows everything, using "<allow-access-from domain="*"/>".
The crossdomain.xml file is available at / on both the backend and the proxy server.
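For reference, the policy file is just the stock allow-everything version:
<?xml version="1.0"?>
<!DOCTYPE cross-domain-policy SYSTEM "http://www.adobe.com/xml/dtds/cross-domain-policy.dtd">
<cross-domain-policy>
<allow-access-from domain="*"/>
</cross-domain-policy>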
The odd part is that when I monitor the traffic with Firebug, the browser bypasses the proxy and goes straight to the backend server to fetch the crossdomain.xml file.
Does anyone have any suggestions on how I can get Flex to behave properly in an environment like this?
I have included my proxy configuration below.
<IfModule mod_proxy.c>
ProxyRequests Off
<Proxy *>
AddDefaultCharset off
Order deny,allow
Allow from all
</Proxy>
# Enable/disable the handling of HTTP/1.1 "Via:" headers.
# ("Full" adds the server version; "Block" removes all outgoing Via: headers)
# Set to one of: Off | On | Full | Block
ProxyVia On
<Location "/">
ProxyPass http://backend:9080/
ProxyPassReverse http://backend:9080/
</Location>
</IfModule>
The problem was actually the result of the endpoints written in the WSDLs generated by the web application: they contained the URL of the backend server. I had to turn on the "ProxyPreserveHost" directive to get it to use the proxy's URL for the endpoints. That fixed the problem.
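Concretely, the only change to the proxy config above was adding this directive:
# Forward the client's Host header to the backend, so the WSDLs it generates
# reference the proxy's URL instead of the backend's
ProxyPreserveHost On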
The Flash Player needs to be given the URL of the reverse proxy server, not the Flex server.
CentOS 7 running Apache 2.4.6 is acting as the central front-facing web server to the Internet. As such it has a few reverse proxy connections set up. They all point to other LAMP servers and work great. However, I have one IIS server running a .NET/ASP website that just doesn't want to load properly. Using the config below on the Apache server, the IIS website loads all of the HTML & CSS.
<VirtualHost *:80>
ServerName example.com
ProxyRequests Off
ProxyPreserveHost On
ProxyPass /extDirectory/ http://internalserver/internalDirectory/
<Location /extDirectory/ >
ProxyPassReverse http://internalserver/internalDirectory/
Order allow,deny
Allow from all
</Location>
</VirtualHost>
However, it looks like there is a sessionID mishap when accessing the site externally compared to accessing the same site internally.
The Apache log, and what I can gather from IIS's log, aren't showing any errors. The only error I am getting is in IE's built-in developer tools, where I see "200 Authorization not found", even though I am logged in successfully.
I wasn't able to fix this using Apache, as the ASP.NET developer came through with a fix to their software instead. This is their reply and solution:
Enterprise creates authorization tokens that are used to authenticate each request coming to the server. Every AJAX request must have a valid authentication token, or it will be rejected. Part of the token is the end user's IP address. If the IP address in the AJAX request is different than the original login request, then the token validation will fail and the AJAX request will be rejected. Enterprise v6.5.2 determines the end user's IP address by looking for three specific HTTP headers in this order: HTTP_X_CLUSTER_CLIENT_IP, HTTP_X_FORWARDED_FOR, REMOTE_ADDR. We think that the proxy server may be sending a different IP address for the AJAX request, which would then cause the token validation to fail and the AJAX request to be rejected.
Open Enterprise's web.config and, near the top, add this line right underneath the <appSettings> element:
<add key="USER_HOST_ADDRESS" value="127.0.0.1" />
Save and close web.config, then restart IIS.
That is it. It turned out to be an ASP.NET issue rather than an Apache one.
On my Ubuntu deployment server Nginx is dropping a custom request header (a token), only if the request is coming from Microsoft Edge or Internet Explorer. Requests coming from Firefox, Chrome or Safari just work fine.
I've done a tcpdump to check the difference between the incoming requests, and the requests look exactly the same (only the User-Agent is different, which seems normal). All the browsers are sending the token to nginx.
Because my header contains an underscore, I have in nginx.conf the line
underscores_in_headers on;
I am logging the header in nginx's access log, and it shows up for all browsers but IE.
Nginx is proxying to a Python Flask application via gunicorn. In the Flask application I immediately log the incoming requests, and the token has disappeared if the browser is IE. So apparently nginx drops the header before sending it to gunicorn.
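For reference, the relevant parts of my nginx config look roughly like this (the upstream address and port here are placeholders, not my real values):
underscores_in_headers on;
server {
listen 80;
location / {
proxy_pass http://127.0.0.1:8000; # gunicorn serving the Flask app (placeholder port)
proxy_set_header Host $host;
}
}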
Any advice on what could cause this?
TLDR: Do you use a WAF? Maybe a WAF as a service?
I'd suggest you investigate your full infrastructure/routing topology. There may be load balancers/things in the path that you're not taking into account.
We literally just ran into this exact same issue at my work and your post was the only thing on the internet that sounded like our problem. We ended up figuring out the root cause.
Here's a simplified version of our topology from a DNS routing perspective:
newwebsite.company.com --> Web Application Firewall as a Service (if this fails it fails open) --> Nginx+ (with WAF plugin) --> Kubernetes Nginx Ingress Controller --> Custom Angular Javascript frontend hosted on Nginx Pod
legacywebsite.company.com --> F5 load balancer --> Windows IIS Web Server.
There was a section of the new site that used the same backend server as the legacy website, and we'd see hidden 500 errors if we looked with Chrome Developer Tools.
We checked the IIS logs and found that headers with underscores were getting stripped from the client's HTTP request before they reached the backend IIS server, and that we had to add underscores_in_headers on; to every Nginx load balancer in the path. That fixed it... or so we thought. It turned out the problem was fixed for every browser except Internet Explorer / Microsoft Edge (your exact scenario).
The crazy thing is that if you were on the one URL path of the new site that forwarded your traffic to the old site's load balancers, you were going through a crazy number of load balancers (the nginx pod that hosted the Angular JavaScript frontend would redirect you to the F5 load balancer). We found the root cause by process of elimination, cutting that chain of load balancers out of the routing with minimal testing: I edited my hosts file so that newwebsite.company.com bypassed the WAF as a Service and pointed straight at the Nginx+ LB acting as a WAF, and it started working, with no more 500 errors for IE/Edge.
Our theory is that our WAF as a Service was stripping out an HTTP header that has an underscore (which Windows IIS web servers use), and that they were only stripping this header for Edge/IE. So we've opened a ticket with them explaining the situation and the steps to reproduce it.
I'm trying to get the following done:
An HTTP request comes in to an address like subdomain.domain.com, which resolves to a public IP on a machine running a proxy (maybe Apache? Anything better?).
Based on the subdomain, I'd like the request to be redirected to an internal machine on a private IP and a specific port. The response for that request will come from that internal machine.
What are my options? Are there any general guidelines for achieving this? What's a good proxy implementation choice? I will also need to dynamically add subdomains over time, each redirecting to a specific internal IP/port.
How do SSL certificates work in a setup with subdomains? Is a separate certificate required for every subdomain?
The setup isn't too hard. You just make a virtual host for each subdomain and configure the vhosts as proxies. The approach is the same regardless of which proxy software you choose. I recommend using Nginx as the reverse proxy, since the configuration is easier and the performance is much better than Apache's. If you still want to use Apache, make sure you do not run PHP on the proxy machine, and use mpm_worker instead of mpm_prefork.
You can make a script which adds new subdomains to the configuration file. It shouldn't be too hard, since the entries will look almost the same, except for the path to the SSL certificate and the IP of the backend server.
For SSL you can use a wildcard certificate, which will cover the entire domain, including subdomains. This is not supported on all platforms, but support has grown in recent years, so it should be pretty safe.
Otherwise, without a wildcard certificate, you will need a certificate and a separate IP address per subdomain (since the SSL connection is set up before the domain name is known, you need to differentiate certificates by IP).
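A rough sketch of what one such Nginx vhost could look like (the subdomain, internal IP and port are placeholders):
server {
listen 80;
# for HTTPS, add "listen 443 ssl;" plus ssl_certificate / ssl_certificate_key lines
server_name app1.domain.com;
location / {
proxy_pass http://10.0.0.11:8080;
proxy_set_header Host $host;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
}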
Apache is perfectly reasonable for this problem. You can do virtual hosts which use mod_proxy:
<VirtualHost *:80>
ServerAdmin xxx@yyy.com
ServerName foo.yyy.com
ProxyRequests Off
<Proxy *>
Order deny,allow
Allow from all
</Proxy>
ProxyErrorOverride On
ProxyPass / http://192.168.1.1/
ProxyPassReverse / http://192.168.1.1/
<Location />
Order allow,deny
Allow from all
</Location>
</VirtualHost>
If you were looking to host hundreds or thousands of subdomains, you could do this with mod_rewrite instead, with a trick involving local name lookups that lets you proxy bar.yyy.com to something like local.bar.yyy.com. Using mod_rewrite for mass virtual hosting is covered in the Apache docs, and using it to proxy instead of just rewrite is relatively straightforward. Doing it that way has the advantage that new subdomains can be added purely through DNS.
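A rough sketch of that mod_rewrite approach, assuming each subdomain resolves internally as local.<subdomain>.yyy.com:
# Proxy <sub>.yyy.com to local.<sub>.yyy.com, resolved via DNS
RewriteEngine On
RewriteCond %{HTTP_HOST} ^([^.]+)\.yyy\.com$ [NC]
RewriteRule ^/(.*)$ http://local.%1.yyy.com/$1 [P,L]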
In terms of SSL, if you are just using *.yyy.com as the subdomains you can use a wildcard certificate (I neither recommend nor disapprove of Thawte, they just had a reasonable description of it). In the general case, though, hosting multiple SSL sites behind a single public IP address is a bit more tricky.
I'm thinking of a web app that uses CouchDB extensively, to the point where there would be great gains from serving with the native erlang HTTP API as much as possible.
Can you configure Apache as a reverse proxy to allow outside GETs to be proxied directly to CouchDB, whereas PUT/POST are sent to the application internal logic (for sanitation, authentication...)? Or is this unwise -- the CouchDB built-in authentication options just seem a little weak for a Web App.
Thanks
You can use mod_rewrite to selectively proxy requests based on the HTTP method.
For example:
RewriteEngine On
# Send all GET and HEAD requests to CouchDB
RewriteCond %{REQUEST_METHOD} ^(GET|HEAD)$
RewriteRule ^/db/(.*)$ http://localhost:5984/mydb/_design/myapp/$1 [P]
# Correct all outgoing Location headers
ProxyPassReverse /db/ http://localhost:5984/mydb/_design/myapp/
Any POST, PUT, or DELETE requests will be handled by Apache as usual, so you can wire up your application tier however you usually would.
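If the application tier is itself reached over HTTP (here a hypothetical app server on localhost:8000), the write path can be proxied with the same pattern:
# Send everything that is not a read to the application tier (port is a placeholder)
RewriteCond %{REQUEST_METHOD} !^(GET|HEAD)$
RewriteRule ^/db/(.*)$ http://localhost:8000/db/$1 [P]
ProxyPassReverse /db/ http://localhost:8000/db/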
Your question is aging without answers, so I'll add this "almost answer".
Nginx can definitely route requests differently based on what they contain.
That is, if you are prepared to put nginx in front as the reverse proxy, with Apache and CouchDB both as backends.
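One way to do the method-based split in nginx is with a map; this is only a sketch, and the ports are placeholders:
# nginx.conf fragment (http context)
map $request_method $backend {
default http://127.0.0.1:8080; # Apache / application tier
GET http://127.0.0.1:5984; # CouchDB
HEAD http://127.0.0.1:5984;
}
server {
listen 80;
location /db/ {
proxy_pass $backend; # with a variable, the original URI is passed through unchanged
}
}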
Did you see this? OAuth and cookie authentication were checked in on the 4th:
http://github.com/halorgium/couchdb/commit/335af7d2a9ce986f0fafa4ddac7fc1a9d43a8678
Also, if you're at all interested in using Erlang as the server language, you could proxy couchdb through webmachine:
http://blog.beerriot.com/2009/05/18/couchdb-proxy-webmachine-resource/
I would consider using the reverse proxy feature of Apache's mod_proxy. Create a virtual host configuration that forwards certain HTTP requests from the web server to CouchDB. You can set up rules for which URI paths should be forwarded, etc.
See this guide for inspiration: http://macgyverdev.blogspot.se/2014/02/apache-web-server-as-reverse-proxy-and.html
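Roughly along these lines (the hostname, path and port are placeholders; the guide above goes into more detail):
<VirtualHost *:80>
ServerName app.example.com
ProxyRequests Off
# Forward only the /couchdb/ path to CouchDB; everything else is served as usual
ProxyPass /couchdb/ http://localhost:5984/
ProxyPassReverse /couchdb/ http://localhost:5984/
</VirtualHost>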
I have 2 servers: one reverse proxy on the web, and one on a private link serving WebDAV.
Both servers are Apache httpd v2.
On the proxy I have:
ProxyRequests Off
<Proxy *>
Order deny,allow
Allow from all
</Proxy>
ProxyPass /repo/ http://share.local/repo/
ProxyPassReverse /repo/ http://share.local/repo/
On the dav server I have:
<Location /repo/>
DAV on
Order allow,deny
allow from all
</Location>
The reverse proxy is accessed via https and the private server is accessed via http.
And there lies the problem!
Read only commands work fine. But when I want to move something I get 502 Bad gateway.
The reason for this is that the reverse proxy is not rewriting the URLs inside the extended DAV request.
The source URL is inside the header and is correctly transformed to http://share.local/file1.
The destination URL is inside some xml fragment I do not understand and stays https://example.com/file1 :(
Is there a standard way to let the apache correctly transform the request?
Thanks for your effort.
Hmm, found the answer. Always the same :)
I added the next line to my 'private server' config file:
LoadModule headers_module /usr/lib/apache2/modules/mod_headers.so
# Strip the https scheme from the Destination header before mod_dav sees it
RequestHeader edit Destination ^https http early
(e.g. of config location '/etc/httpd/conf.d/DefaultRequestHeader.conf')
and it worked. I don't know if this has drawbacks. I'll see.
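An untested alternative would be to do the equivalent edit on the reverse proxy itself (mod_headers needs to be loaded there too), so the backend never sees the external scheme and host; the hostnames here are the ones from the configs above:
# On the proxy: rewrite the Destination header to the backend's scheme and host
RequestHeader edit Destination ^https://example\.com http://share.local early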
The destination URL shouldn't be in XML but in the "Destination" header, as you already noticed. Maybe you were looking at the error response...
In general, this problem would go away when clients and servers implement WebDAV level 3 (as defined in RFC4918), which allows the Destination header to be just a relative path.