SVN over HTTPS: how to hide or encrypt URLs?

I run Apache over HTTPS and can see in the log file that an HTTP/1.1 request is made for every single file in my repository. And for every single file, the full URL is disclosed.
I need to access my repository from a location where I don't want sysadmins to look over my shoulder and see all these individual URLs. Of course I know they won't see file contents, since I am using HTTPS and not HTTP, but I am really annoyed that they can see URLs and, as a consequence, file names.
Is there a way I can hide or encrypt HTTPS URLs with SVN?
This would be great, as I would prefer not to have to resort to svn+ssh, which does not easily/readily support path-based authorization, which I use heavily.

With HTTPS, the full URL is only visible to the client (your svn binary) and the server hosting the repository. In transit, only the hostname you're connecting to is visible.
You can further protect yourself by using a VPN connection between your client and the server, or by tunneling over SSH (not svn+ssh, but a direct SSH tunnel).
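For the SSH-tunnel approach, a minimal sketch might look like this (the hostname, port numbers, and repository path are placeholders; note that the server certificate will still be issued for the real hostname, so the svn client will prompt about the name mismatch):
ssh -N -L 8443:svn.example.com:443 user@svn.example.com    # forward local port 8443 to the repository's HTTPS port over SSH
svn checkout https://localhost:8443/repos/project          # an observer on the local network now only sees an SSH connection to the server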
If you're concerned about the sysadmin of the box hosting your repository seeing your activity in the Apache logs there, you have issues far beyond what can be solved with software. You could disable logging in Apache, but your sysadmin can switch it back on or use other means.
Last option: if you don't trust the system(s) and/or network you're on, don't engage in activities that you consider sensitive on them. They can't see something that isn't happening in the first place.

Related

What will happen if an SSL-configured Nginx reverse proxy passes traffic to a web server without SSL?

I use Nginx to manage a lot of my web services. They listen on different ports, but all are accessed through the Nginx reverse proxy under one domain. For example, to access a RESTful API server I can use http://my-domain/api/, and to access a video server I can use http://my-domain/video.
I have generated an SSL certificate for my-domain and added it to my Nginx conf, so my Nginx server is HTTPS now -- but those original servers are still using HTTP.
What will happen when I visit https://my-domain/<path>? Is this as safe as configuring SSL on the original servers?
One of the goals of making sites HTTPS is to prevent the data transmitted between two endpoints from being intercepted by outside parties, either to be modified, as in a man-in-the-middle attack, or to be stolen and used for bad purposes. On the public Internet, any data transmitted between two endpoints needs to be secured.
On private networks, this need isn't quite as great. Many services run just fine over plain HTTP on private networks. However, there are a couple of points to take into consideration:
Make sure unused ports are blocked:
While you may have an NGINX reverse proxy listening on port 443, is port 80 blocked, or can the sites still be accessed via HTTP?
Are the other ports to the services blocked as well? Let's say your web server runs on port 8080, and the NGINX reverse proxy forwards certain traffic to localhost:8080. Can the site still be accessed at http://example.com:8080 or https://example.com:8080? One way to prevent this is to use a firewall and block all incoming traffic on any ports you don't intend to accept traffic on. You can always unblock them later if you add a service that requires that port to be open.
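As a minimal sketch of that firewall idea, assuming a Linux host with ufw available (the port numbers are only examples):
sudo ufw default deny incoming    # start by blocking all inbound traffic
sudo ufw allow 22/tcp             # keep SSH reachable
sudo ufw allow 443/tcp            # allow HTTPS to the NGINX reverse proxy
sudo ufw enable                   # the backend on localhost:8080 stays reachable only from the box itself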
Internal services are accessible by other services on the same server
The next consideration relates to other software that may be running on the server. While it's within a private ecosystem, any service running on the server can access localhost:8080. Since the traffic between the reverse proxy and the web server is not encrypted, that traffic can also be sniffed, even if authentication is required to access localhost:8080. All a rogue service would need to do is monitor the port and wait for a user to log in. Then that service can capture everything between the two endpoints.
One strategy to mitigate the dangers created by spyware or a rogue service is to either use virtualisation to separate a single physical server into logical servers, or use different hardware for things that are not related. This at least keeps things separate, so the people responsible for application A can't write off an unfamiliar service X as something the team running application B must be using. Anything out of place is more likely to stand out.
For instance, a company website and an internal wiki probably don't belong on the same server.
The simpler we can keep the setup and configuration on the server by limiting what that server's job is, the more easily we can keep tabs on what's happening on the server and prevent data leaks.
Use good security practices
Use good security best practices on the server. For instance, don't run as root; use a non-root user for administrative tasks. Don't run long-lived services as root either.
For instance, NGINX is capable of running as the user www-data. With specific users for different services, we can create groups and assign the different users to them and then modify the file ownership and permissions, using chown and chmod, to ensure that those services only have access to what they need and nothing more. As an example, I've often wondered why NGINX needs read access to logs. It really should, in theory, only need write access to them. If this service were to somehow get compromised, the worst it could do is write a bunch of garbage to the logs, but an attacker might find their hands are tied when it comes to retrieving sensitive information from them.
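As a rough illustration of the ownership idea, assuming a service account called www-data and hypothetical paths (NGINX's master process still opens its log files as root, so treat this purely as an example of the chown/chmod pattern, not NGINX-specific guidance):
sudo chown -R www-data:www-data /var/www/myapp    # the service account owns only its own web root
sudo chown root:www-data /var/log/myapp/app.log   # log owned by root, group www-data
sudo chmod 620 /var/log/myapp/app.log             # owner read/write, group write-only: the service can append but not read back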
localhost SSL certs are generally for development only
While I don't recommend this for production, there are ways to make localhost use HTTPS. One is with a self-signed certificate. The other uses a tool called mkcert, which lets you be your own CA (certificate authority) for issuing SSL certificates. The latter is a great solution, since the browser and other services will implicitly trust the generated certificates, but the general consensus, even from the author of mkcert, is that this is only recommended for development, not production. I've yet to find a good solution for localhost in production. I don't think it exists, and in my experience, I've never seen anyone worry about it.
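For completeness, a hedged sketch of the mkcert workflow for development (the output file names below are what mkcert typically generates):
mkcert -install                   # create a local CA and add it to the system and browser trust stores
mkcert localhost 127.0.0.1 ::1    # issue a certificate for localhost, written to ./localhost+2.pem and ./localhost+2-key.pem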

Windows Azure VM SSL and Cloudapp.net

I installed an ASP.NET application on a Windows Azure VM (IIS 7). The SSL certificate is installed and configured, and the application works correctly. I have removed the HTTP binding and HTTP endpoints.
The issue I am having is that if I use the cloudapp.net link (using https), the application still opens with a mismatched certificate.
What can I do to prevent users from opening my application using https://xx.cloudapp.net/x?
It seems really silly that people are saying this isn't the right place for this question, since some of the solutions could be code-related, e.g., in your application, check the host and, if it's cloudapp.net, do a URL redirect.
There are a few different options here, but it sounds like what you're looking for is simply the ability to prevent someone from viewing the application using that URL.
What I would do is set up a site in IIS that uses host-header resolution to look for xx.cloudapp.net. If that URL is recognized, do a redirect using the HTTP Redirect settings to the https version of your app. Don't bind the SSL port to this site or you'll run into SSL errors like the one you described above.
The other option is to leave it out entirely and simply use host-header resolution to filter out requests for your site. I suspect what you've done is assign all incoming requests to the only IP address on the system, which is why xx.cloudapp.net is showing your app and the certificate is failing.
This would cause xx.cloudapp.net to fail to show any site at all, but I think that might be what you want to do anyway.
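A hedged sketch of the first option using IIS's appcmd tool (the site name, physical path, and destination URL are placeholders, and the HTTP Redirection feature must be installed for the httpRedirect section to exist):
%windir%\system32\inetsrv\appcmd add site /name:CloudappRedirect /physicalPath:C:\inetpub\redirect /bindings:http/*:80:xx.cloudapp.net
%windir%\system32\inetsrv\appcmd set config CloudappRedirect -section:system.webServer/httpRedirect /enabled:true /destination:"https://www.yourdomain.com" /httpResponseStatus:Permanent /commit:apphost
The redirect site only gets an HTTP binding for the xx.cloudapp.net host header; the SSL binding stays on your main site.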

Things to consider when hosting more than one website on a server

I have two websites running on IIS 7. Both require SSL. Ports for the websites are http:8080/https:443 and http:8087/https:443 respectively. I've created a self-signed certificate and put it into the Trusted Root store. The contents of both websites are the same. Here are my questions:
Do I have to make some changes to the hosts file as well? If so, what changes exactly, both on the server and on clients?
What do I have to type in the address bar in order to open them (like 172.16.10.1/website1)? Do I have to specify the port numbers?
For HTTP traffic, you can have many websites that differ by IP, port, host header, or a combination of these.
So in your case it is simple. For website1, you have a site binding on port 8080, so the URL becomes http://172.16.10.1:8080. Ditto for website2: http://172.16.10.1:8087.
To make things simple, you can do a site binding on a host header. So, bind the IP 172.16.10.1 with the default port 80 to a host header, say "www.website1.com", for the first website. Similarly for the other, bind the same combination to "www.website2.com". Now you don't need to specify the port in the URL; you can simply open both websites by their respective names.
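As a hedged illustration of adding such host-header bindings with IIS's appcmd tool (the site names and host names below follow the question and are only examples):
%windir%\system32\inetsrv\appcmd set site /site.name:website1 /+bindings.[protocol='http',bindingInformation='172.16.10.1:80:www.website1.com']
%windir%\system32\inetsrv\appcmd set site /site.name:website2 /+bindings.[protocol='http',bindingInformation='172.16.10.1:80:www.website2.com']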
However, in the case of https, it becomes a bit tricky. Certificates are installed on a per-server basis, so you have to use different IP-port combinations, and host-header binding alone won't work.
One option is to use a wildcard certificate, which you can then bind over SSL to each host header.
The other option is to get a SAN certificate (Subject Alternative Name certificate). This allows you to bind different host headers to the same IP-port combination.
This excellent article on MSDN will help you understand it better: http://blogs.msdn.com/b/varunm/archive/2013/06/18/bind-multiple-sites-on-same-ip-address-and-port-in-ssl.aspx
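Since the question uses self-signed certificates, one hedged way to produce a single self-signed certificate covering both host names is OpenSSL's -addext option (requires OpenSSL 1.1.1 or newer; the file names and host names are examples):
openssl req -x509 -newkey rsa:2048 -nodes -days 365 -keyout sites.key -out sites.crt -subj "/CN=www.website1.com" -addext "subjectAltName=DNS:www.website1.com,DNS:www.website2.com"
The resulting sites.crt can then be bound once and will match either host header.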
Regarding the first part of your question:
You don't need to do anything with the hosts file. If you have a proper third-party certificate, it only needs to be registered on the server. The intermediate and trusted root certificates are already available on the clients, so nothing needs to be done on the client side. You can open "Options" in IE and check "Certificates" under the "Content" tab to see that a list of publishers is already there.
However, if you are using a self-signed certificate, the client part is tricky, because clients will keep getting a "certificate is invalid" warning every time. One way out of this is to manually install the certificate on each client. Another is to deploy the certificate to all clients using Group Policy.
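If you go the manual-install route, a hedged sketch on a Windows client would be something like this (the certificate file name is a placeholder; run from an elevated prompt):
certutil -addstore -f Root website1.cer    # import the self-signed certificate into the client's Trusted Root store
For many clients, the same certificate can be pushed via Group Policy under Computer Configuration > Policies > Windows Settings > Security Settings > Public Key Policies.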

How do I give another computer on my network access to my website hosted locally?

We have a local instance of IIS 7 running with a website. Instead of the default "localhost" we have something like mysite.compname.com. This is a separate entry in IIS 7, and the default website was removed to prevent confusion.
Then in our hosts file we have an entry like this:
127.0.0.1 mysite.compname.com
Now when I try to hit this URL, http://127.0.0.1/ApplicationName/Project/AddProject.aspx, technically it should work, but instead I get a 404. I can vouch that this isn't a problem with the application, because if I navigate to http://mysite.compname.com/ApplicationName/Project/AddProject.aspx it works fine.
My end goal is to be able to give someone my computer name so that they can visit a test page, so the URL above would, I think, turn into http://computername/ApplicationName/Project/AddProject.aspx. Any help, or at least links to help me understand, would be appreciated, because I'm not sure where my issue is coming from.
It sounds like the IIS site / application is configured using a Host Header.
This means that the site will only respond if the host header sent by the browser matches the one configured for the site.
This is a standard method to allow one server to host sites for many host and domain names.
If you wish to allow others to view the site on your computer, you will need to either have a local DNS server which you can edit, or, probably the easiest option, get them to edit their hosts files to include
<your IP> mysite.compname.com
Remember to open the requisite ports (probably only 80, maybe 443 for https) in your firewall.
Or, you can try to edit the site config to remove or modify the Host Header requirement. Be careful, though: it's easy to break things if you don't know the entire architecture of the site.
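Putting the hosts-file and firewall steps together, a hedged sketch for Windows machines (the IP address 192.0.2.10 and the rule name are placeholders; both commands need an elevated prompt):
rem on each client machine, append a hosts entry pointing at the hosting computer
echo 192.0.2.10 mysite.compname.com >> C:\Windows\System32\drivers\etc\hosts
rem on the hosting machine, open port 80 in Windows Firewall
netsh advfirewall firewall add rule name="Allow HTTP 80" dir=in action=allow protocol=TCP localport=80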

Will direct access to a webdav-mounted file cause problems?

I'm thinking about configuring the remind calendar program so that I can use the same .reminders file from my Ubuntu box at home and from my Windows box at work. What I'm going to try to do is make the directory on my home machine that contains the file externally visible through WebDAV on Apache. (Security doesn't really concern me, because my home firewall only forwards SSH; to hit port 80 on my home box, you need to use SSH tunneling.)
Now, my understanding is that WebDAV was designed to arbitrate simultaneous access attempts. My question is whether this is compatible with direct file access from the host machine. That is, I understand that if I have two or more remote WebDAV clients trying to edit the same file, the WebDAV protocol is supposed to provide locking, so that only one client can have access at a time, and hence the file will not be corrupted.
My question is whether these protections will also guard against local edits made through the filesystem rather than through WebDAV. Should I mount the WebDAV directory on the host machine and direct all local edits through the WebDAV mount? Or is this unnecessary?
(In this case, with only me accessing the file, it's exceedingly unlikely that I'd get simultaneous edits, but I like to understand how systems are supposed to work ;)
If you're not accessing the files over the WebDAV protocol, you're not honoring locks set via the LOCK and UNLOCK methods, and you therefore open up the potential to overwrite changes made by another client. This situation is described in the WebDAV RFC here: https://www.rfc-editor.org/rfc/rfc4918#section-7.2
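If you do want local edits to respect the same locks, one hedged option on the Ubuntu host is to mount the share through davfs2 and edit only via that mount (the URL and mount point are placeholders for your setup):
sudo apt-get install davfs2
sudo mkdir -p /mnt/reminders
sudo mount -t davfs http://localhost/reminders /mnt/reminders    # local edits now go through WebDAV like any other client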
