Getting squid to cache an HTTP request that gets redirected to an HTTPS request - squid

I have an odd situation. I've got code that requests data from http://machine.com/datafile. machine.com redirects to an https address, but otherwise leaves things the same.
I'd like squid to cache these requests. I'd need to set up squid (or something) so that when it sees a request to http://machine.com/datafile, it actually fetches the data from https://machine.com/datafile (transparently, without a local redirect).
Is a configuration like this possible?
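
One way people approach this is Squid's url_rewrite_program hook: a small external helper that rewrites the requested URL before Squid fetches it. The sketch below is a minimal, hypothetical helper in Python; it assumes Squid 3.4+ helper response syntax, url_rewrite_concurrency left at 0, and that your Squid build will fetch the rewritten https:// origin itself, so treat it as a starting point rather than a drop-in answer.

    #!/usr/bin/env python3
    # Hypothetical Squid url_rewrite_program helper (assumes Squid 3.4+ syntax,
    # no concurrency). Rewrites http://machine.com/datafile to the https://
    # equivalent so the upstream fetch happens over TLS while the client keeps
    # speaking plain HTTP to the cache.
    import sys

    def main():
        for line in sys.stdin:
            # Squid sends: URL client_ip/fqdn ident method [extras...]
            parts = line.split()
            if not parts:
                continue
            url = parts[0]
            if url.startswith("http://machine.com/datafile"):
                rewritten = "https://" + url[len("http://"):]
                sys.stdout.write("OK rewrite-url=%s\n" % rewritten)
            else:
                sys.stdout.write("ERR\n")  # leave the URL unchanged
            sys.stdout.flush()

    if __name__ == "__main__":
        main()

Wiring it in would look something like url_rewrite_program /usr/local/bin/rewrite_datafile.py in squid.conf; the exact path and helper options depend on your installation.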

Related

If a domain is secured with HTTPS, does it become less secure if the URL is typed using HTTP?

For example, does typing the URL http://www.google.com/ make it less safe, even though the default for this domain is HTTPS?
Accessing a site using http:// (thus the unprotected, unencrypted HTTP protocol) means that at least one unprotected HTTP request is sent to the server (most sites that support HTTPS will automatically redirect you to the https:// version).
This unprotected request can be intercepted by an attacker, who can then send you back arbitrary data (malicious JavaScript code, redirects to other sites, and so on).
The only exception is if you type http://www.google.com/ and you are using the Chrome browser, because Chrome will automatically change the entered URL to https:// for addresses on google.com before anything is sent on the network.
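
To see that initial plaintext round trip for yourself, here is a small sketch using Python's requests library. The exact status and Location value depend on the site and your region; some sites answer over plain HTTP instead of redirecting, so take the printed values as illustrative only.

    import requests

    # Fetch the plain-HTTP URL without following redirects, so we can see
    # the server's answer to the one unprotected request.
    resp = requests.get("http://www.google.com/", allow_redirects=False)
    print(resp.status_code)              # typically 301/302 on redirecting sites
    print(resp.headers.get("Location"))  # the https:// URL the server points to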

HTTP on a HTTPS Website

I was just wondering about this small question. I know it is irrelevant to coding, but I just had to know quickly.
If you type in http:// for an https:// site, will it still take you to the correct place?
That is mostly dependent on the server configuration. The server has to accept the initial HTTP request and be configured to redirect the client to an appropriate HTTPS url.
That being said, there are some Internet standards related to automating HTTP-to-HTTPS upgrades. HTTP Strict Transport Security and Upgrade Insecure Requests allow an HTTP/S server to tell clients that it wants them to automatically use HTTPS for all subsequent requests. If a client visits an HSTS/UIR-enabled server, it will receive a normal HTTP response with additional HSTS/UIR-related headers. If the client supports HSTS/UIR, it will then know to automatically send all subsequent HTTP requests to that same server using HTTPS, and in the case of UIR also treat any received HTTP URLs as if they were HTTPS URLs.
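
As an illustration of what the HSTS part looks like on the wire, here is a minimal sketch in Python. Not every site sends the header (some rely on browser preload lists), and the max-age value shown in the comment is just a typical example, so substitute any HSTS-enabled site you like.

    import requests

    # First visit over HTTPS: the response carries the HSTS policy header.
    resp = requests.get("https://www.google.com/")
    print(resp.headers.get("Strict-Transport-Security"))
    # e.g. "max-age=31536000; includeSubDomains" -- after seeing this, a
    # conforming browser rewrites later http:// requests for this host to
    # https:// locally, before anything touches the network.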

HSTS bypass with sslstrip+ & dns2proxy

I am trying to understand how to bypass HSTS protection. I've read about tools by LeonardoNve ( https://github.com/LeonardoNve/sslstrip2 and https://github.com/LeonardoNve/dns2proxy ). But I don't quite get it.
If the client is requesting the server for the first time, it will work every time, because sslstrip will simply strip the Strict-Transport-Security: header field. So we're back to the old case with the original sslstrip.
If not ... ? What happens ? The client knows it should only interact with the server using HTTPS, so it will automatically try to connect to the server over HTTPS, no ? In that case, MitM is useless ... ><
Looking at the code, I kinda get that sslstrip2 will change the domain name of the resources needed by the client, so the client will not have to use HSTS since these resources are not on the same domain (is that true?). The client will send a DNS request that the dns2proxy tool will intercept and answer with the IP address of the real domain name. In the end, the client will just fetch over HTTP the resources it should have fetched over HTTPS.
Example: From the server response, the client will have to download something from mail.google.com. The attacker changes that to gmail.google.com, so it's not the same (sub)domain. The client will then make a DNS request for this domain, and dns2proxy will answer with the real IP of mail.google.com. The client will then simply request this resource over HTTP.
What I don't get is what happens before that... How can the attacker strip the HTML while the connection from the client to the server should be HTTPS ... ?
A piece is missing ... :s
Thank you
OK, after watching the video, I have a better understanding of the scope of action possible with the dns2proxy tool.
From what I understood :
Most users will get to an HTTPS page either by clicking a link or via a redirection. If the user directly fetches the HTTPS version, the attack fails because we are unable to decrypt the traffic without the server certificate.
In the case of a redirection or a link, with sslstrip+ and dns2proxy enabled and us being in the middle of the connection .. MitM ! ==>
The user goes to google.com
The attacker intercepts the traffic from the server to the client and changes the sign-in link from "https://account.google.com" to "http://compte.google.com".
The user's browser will make a DNS request for "compte.google.com".
The attacker intercepts the request, makes a real DNS request for the real name "account.google.com", and sends the response "fake domain name + real IP" back to the user.
When the browser receives the DNS answer, it will check whether this domain should be accessed over HTTPS (by checking a preloaded HSTS list of domains, or by looking at domains it has already visited in its cache or for the session). Since the fake domain appears in neither, the browser will simply make an HTTP connection to the REAL IP address.
==> HTTP traffic at the end ;)
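
The link-rewriting step above (turning "https://account.google.com" into "http://compte.google.com") can be pictured with a small sketch in Python. This is purely illustrative of what sslstrip+-style tools do conceptually, not their actual code; the domain pair is the one from this example, and dns2proxy would later resolve the fake name back to the real IP.

    import re

    # Illustrative only: map real HTTPS hosts to look-alike plain-HTTP hosts.
    HOST_MAP = {
        "account.google.com": "compte.google.com",
    }

    def strip_links(html: str) -> str:
        """Rewrite https:// links into http:// links on a look-alike host."""
        for real, fake in HOST_MAP.items():
            html = re.sub(r"https://" + re.escape(real), "http://" + fake, html)
        return html

    page = '<a href="https://account.google.com/signin">Sign in</a>'
    print(strip_links(page))
    # -> <a href="http://compte.google.com/signin">Sign in</a>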
So the real limitation is still the need for indirect HTTPS links for this to work. Sometimes the browser directly "re-types" the entered URL into an HTTPS link.
Cheers !

Can I forge the HTTP HOST-header param in order to fake a request to a non-mapped subdomain?

Scenario: I want a staging environment at a customer's site. The customer owns www.example.com. I want to map the site to staging.example.com reachable from the outside, but I haven't got time to wait for the bureaucracy surrounding either the purchase of the new subdomain or opening of secondary HTTP ports.
Assumption: If I spoof the HTTP header param Host to be staging.example.com on the client side, but actually make the request to the IP of www.example.com, IIS will route the request to the site configured for staging.example.com. Am I right?
So is there any client tool that can help me with that? I'm fairly familiar with Fiddler, but it seems to override my rewrites of the Host parameter. Also, I would need to configure it to do it for every request, not just one, to make it trivial to test.
Are there simpler solutions to this problem?
I'm not entirely sure what you're asking.
Inside Fiddler, by clicking Tools > HOSTS you can send all traffic targeting one site, e.g. dev.example.com, to the IP of your choice. The target site (namely dev.example.com) doesn't need to exist at all in this case. Your client (e.g. the browser) has no idea that Fiddler is retargeting the traffic; it just thinks that it is talking to dev.example.com.
If you have the Fiddler book, check out the Retargeting Traffic section for many other ways to retarget traffic.
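
If Fiddler keeps overriding the Host header, the assumption in the question can also be tested directly from a small script. A sketch with Python's requests library follows; 203.0.113.10 is a documentation-range placeholder standing in for www.example.com's real IP, and the host name is the question's own placeholder.

    import requests

    # Send the request to the server's IP but present the unmapped host name
    # in the Host header; IIS selects the site by its host-header binding.
    resp = requests.get(
        "http://203.0.113.10/",
        headers={"Host": "staging.example.com"},
    )
    print(resp.status_code, resp.headers.get("Server"))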

Removing HTTP 301 redirect from client's cache

I have a server/client architecture where the client hits the ASP.NET server's service at a certain host name, IP address, and port. Without thinking, I logged on to the server and set up a permanent HTTP 301 redirection through IIS from that service to another URL that the machine handles via IIS (same IP and port), mistakenly thinking it was another site that is hosted there. When the client hit the server at the old host name, it cached the permanent redirect. Now, even though I have removed the redirection, the client no longer uses the old address. How can I clear the client's cache so that it no longer stores the redirect?
I have read about how permanent an HTTP 301 can be, but in this case, it should be possible to reset a single client's knowledge of the incorrectly-learned host name. Any ideas?
The HTTP status code 301 is unambiguously defined in RFC 2616 as
"any future references to this resource SHOULD use one of the returned URIs",
which means you have to go ask all your clients to revalidate the resource. If you have a system where you can push updates to your clients, perhaps you can push an update to use the same URI again, but force a revalidation.
Nothing you do on the server side will help - in fact, by removing the permanent redirect in IIS you have already taken all measures you should.
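
To double-check that the server side really is clean (i.e. the old URL no longer answers with a 301), a quick probe that does not follow redirects will show what the server currently returns. A sketch in Python, with a placeholder URL standing in for the service's old address:

    import requests

    # Ask for the old URL without following redirects; once the 301 is gone,
    # we should see the service's normal response instead of a Location header.
    resp = requests.get("http://old-host.example.com:8080/service",
                        allow_redirects=False)
    print(resp.status_code)
    print(resp.headers.get("Location"))  # None once the redirect is removed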
