Tomcat http is not redirecting to https

I have two instances of Tomcat set up on two different servers. I didn't explicitly select which versions to install; they were both installed automatically as part of IBM Rational Team Concert installations (v5.0.1 and v5.0.2, one on each server), but I can say they are both at least version 7.
On the first instance, when I go to http://myserver.domain.com:9443/ccm, I get automatically redirected to https://myserver.domain.com:9443/ccm.
On the second instance, when I go to http://otherserver.domain.com:9443/ccm, I do not get redirected to https, but instead either get a strange download or a blank page with an unrecognizable Unicode character (depending on the browser).
I notice that the two server.xml files are different (I am not sure why RTC changed them so much between minor releases), but it is not obvious from comparing them what I have to set in the second server.xml to achieve the behavior of the first. Port 9443 is set up as an HTTPS port. What do I set in server.xml to make all http requests to that port automatically redirect to https?

Tomcat can't do what you are asking. There is no mechanism to detect that http is being used on an https port and redirect the user accordingly. This might be something we add in Tomcat 9 but that is very much just an idea at this stage.
Something other than Tomcat is performing the redirect you observe. Take a look at the HTTP headers - they might provide some clue as to what is going on.

Related

SVN over HTTPS: how to hide or encrypt URLs?

I run Apache over HTTPS and can see in the log file that an HTTP/1.1 request is made for every single file of my repository. And for every single file, the full URL is disclosed.
I need to access my repository from a location where I don't want sysadmins to look over my shoulder and see all these individual URLs. Of course I know they won't see file contents, since I am using HTTPS and not HTTP, but I am really annoyed that they can see URLs and, as a consequence, file names.
Is there a way I can hide or encrypt HTTPS URLs with SVN?
This would be great, as I would prefer not having to resort to using svn+ssh, which does not easily/readily support path-based authorization, which I am using heavily.
With HTTPS, the full URL is only visible to the client (your svn binary) and the server hosting the repository. In transit, only the hostname you're connecting to is visible.
You can further protect yourself by using a VPN connection between your client and the server, or by tunneling over SSH (not svn+ssh, but a direct ssh tunnel).
If you're concerned about the sysadmin of the box hosting your repository seeing your activity in the Apache logs there, you have issues far beyond what can be solved with software. You could disable logging in Apache, but your sysadmin can switch it back on or use other means.
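For what it's worth, a minimal sketch of what disabling that logging could look like, assuming you controlled the Apache config on the repository host yourself (the "/svnrepo/" path and the log location are placeholders, not taken from your setup):
# Sketch only: skip access logging for requests under an assumed /svnrepo/ path.
SetEnvIf Request_URI "^/svnrepo/" dontlog
CustomLog /var/log/apache2/access.log combined env=!dontlog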
Last option: if you don't trust the system(s) and/or network you're on, don't engage in activities that you consider sensitive on them. They can't see something that isn't happening in the first place.

Block bad server requests while using the lowest server overhead

Our web server is occasionally getting slammed with loads of requests for an (Exchange server) file that doesn't exist on our (Apache) web server:
autodiscover/autodiscover.xml
How can I respond to those requests in a way that requires the least load on our server?
(When these requests happen, our VPS memory usage spikes and goes over our RAM allocation.)
I want the server to respond, with the lowest overhead, telling them basically to go away; that file doesn't exist here; stop; bad request (you get the idea).
Right now, the server returns a 404 error page, which means that our WordPress installation is invoked. It returns our custom 404 WordPress error page. That involves a lot of overhead that I'd like to avoid.
I suspect that these requests come from some sort of hacking attempts, but I'm just guessing at that. At any rate, I just want to intercept them and block or stop them as quickly and efficiently as possible.
(I've put IP blocks on the IP addresses they come from but I think that is just playing whack-a-mole.)
I've put this in our htaccess file but it doesn't do what I want:
<IfModule mod_rewrite.c>
RewriteEngine On
RewriteRule ^autodiscover/autodiscover.xml$ "-" [forbidden,last]
</IfModule>
Is there something wrong with this rule?
Can I or should I use our htaccess file in another way to do what I want? Is there a better way to do it than using htaccess? Could we be returning something other than a 404 response? Perhaps a 400 or 403 response? How would we do that? We are on a VPS server to which I do not have direct access.
"No direct access" means you can't install software?
I'd totally recommend setting up nginx as a reverse proxy, but if that's out of the question because you have no local admin access: use a service like CloudFlare and use page rules (which runs the reverse proxy for you, and more).
You don't want anything unnecessary to reach apache if you run pre-fork mpm (which you probably do because of mod_php), as it will spawn lots of processes to handle parallel clients, and it is relatively slow in starting them. Additionally, once the storm is over, they'll be killed again (depending on your setting for MaxSpareServers).
Regarding MaxSpareServers, and a way to at least contain the damage this does:
If you run apache's mpm_prefork (you might not be; MPM = Multi-Processing Module, basically what "engine" apache uses; most PHP setups with mod_php run mpm_prefork), apache needs a child process to handle each parallel request.
Now, when 100 requests come in at the same time, apache will spawn 100 children (or fewer, if MaxRequestWorkers is set below 100) to handle them. That's VERY costly, because each of those processes needs to start and will take up memory, and for all intents and purposes it can result in a denial of service as your VPS starts to swap and everything slows down.
Once this request storm is over, apache is left with 100 child processes, and will start to remove some of them until it reaches whatever has been set as MaxSpareServers. If there are no spare servers, new requests will have to wait in line for a child to handle them. To users, that's basically "the server doesn't respond".
If you can't put something in front of apache, can you change (or have somebody change) the apache config? You will have some success if you limit MaxRequestWorkers, i.e. the maximum number of children apache can spawn (and therefore the maximum number of parallel requests that can be handled). If you do, the server will still appear unresponsive at times of high load, but it won't affect the system as much, because your processes won't start to use swap instead of RAM and become terribly slow.
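As a rough illustration only (the numbers are assumptions, not recommendations; they have to be sized to the VPS's actual RAM and the memory footprint of a single PHP child), such a prefork section might look like this:
# Illustrative values only; measure one child's memory use and divide your available RAM by that.
<IfModule mpm_prefork_module>
    StartServers            5
    MinSpareServers         5
    MaxSpareServers        10
    MaxRequestWorkers      50
    MaxConnectionsPerChild 1000
</IfModule>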
Any attempt to Deny access to the file, whether via Deny or an [F] forbidden flag, results in WordPress kicking in and responding with a 404 error page.
To avoid this, I have created a custom 421 error page (which seemed most appropriate), placed it outside of the WordPress directory, and used an htaccess Redirect from the bad-request file name to the custom error page.
Redirect /autodiscover/autodiscover.xml /path/to/error-421-autodiscover-xml.php
This returns the custom error page (only 1KB) without WordPress being invoked. Creating a custom error page specific to this one request, with the bad-request file name included in the error page's file name, should also help to identify these calls in our log files.
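A possible lower-overhead variation, sketched here only as an untested idea (it assumes the host allows ErrorDocument overrides in .htaccess): answer the probes with a 410 Gone and Apache's built-in error body, so that no PHP page is involved at all.
# Untested sketch: 410 Gone with Apache's built-in error body, bypassing WordPress and PHP entirely.
ErrorDocument 410 default
<IfModule mod_rewrite.c>
RewriteEngine On
RewriteRule ^autodiscover/autodiscover\.xml$ - [G,NC,L]
</IfModule>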
If you ran an Exchange server elsewhere for email, then it might be appropriate to redirect from the web server to the Exchange server using .htaccess on the web server. An appropriate .htaccess rule might be:
RedirectMatch 302 (?i)^/autodiscover/(.*)$ https://mail.example.com/autodiscover/$1
See: https://glennmatthys.wordpress.com/2013/10/14/redirect-autodiscover-urls-using-htaccess/
#janh is certainly right that a mechanism to intercept the request before it even hits our Apache server would be ideal, but it is beyond what we can do right now.
(The /autodiscover/autodiscover.xml seems to be a file used by Exchange servers to route traffic to the mail server. According to our web hosting company, it is not unusual for Apache servers to be hit with requests for this file. Some of these requests may come from malicious bots searching for access to a mail server; some may come from misconfigured smart phone email clients or other misdirected requests intended for an Exchange server.)

Things to consider when hosting more than one website on a server

I have two websites running on IIS 7. Both require SSL. Ports for the websites are http:8080/https:443 and http:8087/https:443 respectively. I've created self-signed certificates and put them into the Trusted Root store. The contents of both websites are the same. Here are my questions:
Do I have to make some changes to the hosts file as well? If so, what changes exactly, both on the server and the clients?
What do I have to type in the address bar in order to be able to open them? (Like 172.16.10.1/website1?) Do I have to specify the port numbers?
For http traffic, you can have many websites, which can differ by IP or port or host header or a combination.
So in your case it is simple. For website1, you have a site binding on port 8080, so the URL becomes http://172.16.10.1:8080. Ditto for website2: http://172.16.10.1:8087.
To make things simple, you can do a site binding on a host header. So, bind the IP 172.16.10.1 with default port 80 to a host header, say "www.website1.com", for the first website. Similarly for the other, make the same combination bind to "www.website2.com". Now you don't need to specify the port in the URL. You can simply open both websites by their respective names.
However, in the case of https, it becomes a bit tricky. The certificates are installed on a per-server basis. So you have to specify different IP-port combinations, and host header binding won't work.
One option you have is to use a wildcard certificate, which you can then secure-bind to each host header.
The other option is to get a SAN certificate (Subject Alternative Name certificate). This will allow you to do a binding on different host headers with the same IP-port combination.
This excellent article on MSDN will help you understand it better: http://blogs.msdn.com/b/varunm/archive/2013/06/18/bind-multiple-sites-on-same-ip-address-and-port-in-ssl.aspx
Regarding the first part of your question:
You don't need to do anything with the hosts file. If you have a proper third-party certificate, it only needs to be registered on the server. The intermediate and trusted roots are already available on the clients, so there is nothing to be done on the client side. You can open up "options" in IE and then check "certificates" under the "content" tab to see that a list of publishers is already there.
However, if you are using a self-signed certificate, then the client part is tricky, because the clients will keep on getting the "certificate is invalid" warning every time. One way out of this is to manually install the certificate on each client. Another way is to deploy the certificates to all clients using group policy.

Set up a maintenance URL in a generic way

Suppose I have a website that is normally accessed at address www.mywebsite.com.
Now let's say the website is down completely (think: the server has melted). I want users trying to reach www.mywebsite.com to end up on a maintenance URL on another server instead of getting a 404.
Is this easily possible without having to route all the traffic through a dispatcher/load balancer?
I could imagine something like :
When the default server is UP traffic is like :
[USER]<---->[www.mywebsite.com]<---->[DISPATCHER]<---->[DEFAULT SERVER]
When the default server is DOWN traffic is like :
[USER]<---->[www.mywebsite.com]<---->[DISPATCHER]<---->[MAINTENANCE SERVER]
Where [DISPATCHER] figures out where to route the traffic. Problem is that in this scenario all the traffic goes through [DISPATCHER]. Can I make it so that the first connection goes through dispatcher, and then, if the default server is up, the traffic goes directly from the user to the default server? (with a check every 10 - 15 minutes for example)
[USER]<---->[www.mywebsite.com]<-------->[DEFAULT SERVER] after the first successful connection
Thanks in advance!
Unfortunately, maybe the most practical solution is to give up, at least until browsers finally add support for SRV records...
You can achieve what you want with dynamic DNS: set up a monitoring script on a "maintenance server" that checks whether your website is down and, if so, updates the DNS for your site to point it to the maintenance server. This approach has its own problems, the biggest of which is that any monitoring may generate false positives, and thus your users will see the maintenance page while the site is actually up.
Another possible approach (even worse): make www.example.com point to your dispatcher server and www2.example.com to your main server. The dispatcher would then HTTP-redirect all incoming requests to www2.example.com.
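If the dispatcher happened to be an Apache box, that redirect could be as small as the following sketch (the hostnames are the placeholder examples from above):
# Sketch only: the dispatcher answers for www.example.com and bounces everything to www2.
<VirtualHost *:80>
    ServerName www.example.com
    Redirect temp / http://www2.example.com/
</VirtualHost>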
But what will you do when your dispatcher melts? While trying to handle one point of failure, you have just added another one.
Maybe it's practical to handle all page links in some JavaScript that checks whether the server is up first, and only then follows the link. This approach requires some scripting, but at least it provides the best results when your server goes down while the user is already on your site. But it does nothing for those who try to enter the site for the first time.
If only browsers would support SRV records....

Creating an HTTP handler for IIS that transparently forwards requests to a different port?

I have a public web server with the following software installed:
IIS7 on port 80
Subversion over apache on port 81
TeamCity over apache on port 82
Unfortunately, both Subversion and TeamCity come with their own web server installations, and they work flawlessly, so I don't really want to try to move them all to run under IIS, if that is even possible.
However, I was looking at IIS and I noticed the HTTP redirect part, and I was wondering...
Would it be possible for me to create an HTTP handler and install it on a sub-domain under IIS7, so that all requests to, say, http://svn.vkarlsen.no/anything/here are passed to my HTTP handler, which then creates a request to http://localhost:81/anything/here, retrieves the data, and passes it on to the original requester?
In other words, I would like IIS to handle transparent forwards to ports 81 and 82, without using the redirection features. For instance, Subversion doesn't like HTTP redirects and just says that the repository has been moved and that I need to relocate my working copy. That's not what I want.
If anyone thinks this can be done, does anyone have any links to topics I need to read up on? I think I can manage the actual request parts, even with authentication, but I have no idea how to create a HTTP handler.
Also bear in mind that I need to handle sub-paths and documents beneath the top-level domain, so http://svn.vkarlsen.no/whatever/here needs to be handled by a single handler; I cannot create copies of the handler for every sub-directory, since new paths are created from time to time.
Try the Application Request Routing addon for IIS to configure IIS as a reverse proxy.
