http redirects to https - networking

What would cause a site to try to go to an HTTPS URL?
We have Sitecore set up to redirect non-www URLs to www-prepended URLs. Example: joesrx.com resolves to www.joesrx.com through the Sitecore URLResolver.
What we are seeing is that if you type joesrx.com, it tries to go to https://joesrx.com before it hits the Sitecore server. Since there are no certificates on this server and HTTPS is not utilized, we get a 404.
Is there something in IIS that is misconfigured? The proxy team says it is not in their settings, and the network team says all of the DNS entries are correct.

As a general rule for debugging these sorts of problems, try to enumerate all the elements between you and the application and then use a simple divide-and-conquer approach: test the behavior at individual levels of the path between you and the actual application.
In this case, for example (from you to the application code):
User
Browser
The browser may cache redirects. Try a different browser, try incognito mode, or clear the cache.
Browser Extensions/Settings
Any extensions which make the browser always access certain website(s) via HTTPS? Try with the extension disabled or in another browser.
Proxies/Firewalls
Any proxies/firewalls on your end which may modify requests? Can you try to access the site bypassing them, maybe from a different network connection?
Network
Web Server
Web Server Configuration / Pipelines / Resolvers / Filters / Etc.
.htaccess / IIS config / nginx config / servlet filters / ... (lots of options depending on your framework). Check the server configuration.
Actual application code
Well... check the code.
Example of divide and conquer, choosing the Network mid-point: try accessing the URL with wget/curl from the command line; curl -i will also show you the headers received from the server. If you find a "Location: ..." header, it's clear that the server is sending a redirect, so now you only have to check the web server / framework configuration and the actual application code.
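If you prefer to do that mid-point check from code rather than curl, a rough Go equivalent (a sketch only; joesrx.com from the question stands in for whatever host you are debugging) is to disable redirect following and print the status and Location header:

package main

import (
	"fmt"
	"net/http"
)

func main() {
	client := &http.Client{
		// Do not follow redirects, so we can see them.
		CheckRedirect: func(req *http.Request, via []*http.Request) error {
			return http.ErrUseLastResponse
		},
	}
	resp, err := client.Get("http://joesrx.com/") // example host from the question
	if err != nil {
		fmt.Println("request failed:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("Status:  ", resp.Status)
	fmt.Println("Location:", resp.Header.Get("Location")) // non-empty means the server sent a redirect
}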

There are a few things I would check first:
Do you have rewrite rules in your web.config? They may be pattern-matching on www. and redirecting in order to enforce SSL
Do you have code in your pipelines that is attempting to enforce SSL for specific paths? The code here may not be checking the URL correctly.
In IIS, did you bind the 'www' host name to your IIS site? Or is it falling through to another site that has SSL enforced?

In case the other answers don't help, check for HSTS headers such as "Strict-Transport-Security: max-age=31536000".
This HTTP header tells browsers to use only HTTPS for future requests to that host (among other things), so the browser rewrites http:// URLs to https:// before any request is made.
For more info check out:
https://www.owasp.org/index.php/HTTP_Strict_Transport_Security
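For context on why this would happen before the request ever reaches Sitecore: once a browser has received this header over HTTPS, it upgrades http:// URLs for that host locally for max-age seconds. The header is typically injected by a proxy or by a small piece of middleware roughly like the following (a hypothetical Go sketch, only to illustrate the mechanism, not anything from your stack):

package main

import "net/http"

// hsts adds a Strict-Transport-Security header to every response,
// telling browsers to use HTTPS for this host for the next 365 days.
func hsts(next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		w.Header().Set("Strict-Transport-Security", "max-age=31536000")
		next.ServeHTTP(w, r)
	})
}

func main() {
	mux := http.NewServeMux()
	mux.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		w.Write([]byte("hello\n"))
	})
	http.ListenAndServe(":8080", hsts(mux))
}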

Related

Building URLs in Go including server scheme

I am creating a REST API in Go, and I want to build URLs to other resources in my replies.
Based on the http.Response I can get the Host and URL.
However, how would I go about getting the transport scheme used by the server? http or https?
I attempted to check if server.TLSConfig is nil and then assume it is using http, since the documentation for http.Server says:
TLSConfig *tls.Config // optional TLS config, used by ListenAndServeTLS
But it turns out this exists even when I do not run the server with ListenAndServeTLS.
Or is this way of building my URLs the wrong way of doing things? Is there some other normal way of doing this?
My preferred solution when running http and https is just to run a simple listener on :80 that redirects all traffic to https. Then any real traffic can be assumed to be https.
Alternately, you can check req.TLS on the incoming request: it is non-nil only when the request arrived over TLS (req.URL.Scheme is typically empty for requests a server receives, so it is not a reliable way to see the protocol).
Or do you mean for the entire application? If you accept configuration to switch between http and https, then can't you look at that and see which they chose? I guess I'm missing some context maybe.
It is also common practice for apps to take a base URL via a flag or config option and use it to generate external URLs.
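A rough sketch of the first two suggestions combined (assumptions: standard ports 80/443 and placeholder certificate/key paths):

package main

import (
	"fmt"
	"net/http"
)

func main() {
	// Plain-HTTP listener whose only job is to redirect everything to HTTPS,
	// so any request reaching the real handlers can be assumed to be HTTPS.
	go http.ListenAndServe(":80", http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		http.Redirect(w, r, "https://"+r.Host+r.URL.RequestURI(), http.StatusMovedPermanently)
	}))

	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		// r.TLS is non-nil only for requests that arrived over TLS,
		// which is a more reliable per-request check than r.URL.Scheme.
		scheme := "http"
		if r.TLS != nil {
			scheme = "https"
		}
		fmt.Fprintf(w, "this resource lives at %s://%s%s\n", scheme, r.Host, r.URL.Path)
	})
	http.ListenAndServeTLS(":443", "cert.pem", "key.pem", nil) // cert/key paths are placeholders
}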

Downsides of 'Access-Control-Allow-Origin: *'?

I have a website with a separate subdomain for static files. I found out that I need to set the Access-Control-Allow-Origin header in order for certain AJAX features to work, specifically fonts. I want to be able to access the static subdomain from localhost for testing as well as from the www subdomain. The simple solution seems to be Access-Control-Allow-Origin: *. My server uses nginx.
What are the main reasons that you might not want to use a wildcard for Access-Control-Allow-Origin in your response header?
You might not want to use a wildcard when e.g.:
Your web app and, say, its AJAX backend API are running on different domains (or just on different ports) and you do not want to expose the backend API to the whole Internet; in that case you do not send *. For example, if your web app is on http://www.example.com and the backend API on http://api.example.com, the API would respond with Access-Control-Allow-Origin: http://www.example.com.
If the API wants the client to send cookies (credentialed requests), it must not send Access-Control-Allow-Origin: *; the value must be the origin of the actual request.
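A minimal sketch of that non-wildcard case (in Go; the hosts are the example ones from above and the allowlist itself is hypothetical):

package main

import "net/http"

func main() {
	// Origins allowed to call this API with credentials (cookies).
	allowed := map[string]bool{
		"http://www.example.com": true,
		"http://localhost:8000":  true, // hypothetical local dev origin
	}
	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		if origin := r.Header.Get("Origin"); allowed[origin] {
			// Echo the specific origin instead of "*" so credentialed
			// requests are permitted by the browser.
			w.Header().Set("Access-Control-Allow-Origin", origin)
			w.Header().Set("Access-Control-Allow-Credentials", "true")
			w.Header().Set("Vary", "Origin") // the response differs per origin, so caches must vary on it
		}
		w.Write([]byte("ok\n"))
	})
	http.ListenAndServe(":8080", nil)
}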
For testing, adding an entry in the /etc/hosts file mapping 127.0.0.1 (or the server's public IP) to dev.mydomain.com is a decent workaround.
Another way can be to have a separate domain served by nginx itself, like dev.mydomain.com, pointing to the same (or a test) instance of the backend servers and static web root, with some security measures like:
satisfy all;
allow <YOUR-CIDR/IP>;
deny all;
Clarification on: Access-Control-Allow-Origin: *
This policy protects the users of your website from being scammed/hijacked while visiting other, malicious websites in a modern browser which respects it (all well-known browsers do).
It does not protect the web service itself from scraper scripts that hit your static assets and APIs at high rates, brute-force attacks, bulk downloading, load generation, etc.
P.S.: For development, you can consider using a free, low-footprint, private peer-to-peer VPN-like network between your development box and the server: https://tailscale.com/
In my opinion, the main downside is that you could have other websites consuming your API without your explicit permission.
Imagine you have an e-commerce site; another website could run all its transactions with its own look and feel but backed by you. In the end that might seem fine, since you still get the money, but your brand loses its recognition.
Another problem would be if that website altered the payload sent to your backend, doing things like changing the delivery address.
The idea is simply not to authorize unknown websites to consume your API and show its results to their users.
You could use the hosts file to map 127.0.0.1 to your domain name, "dev.mydomain.com", since you do not want to use Access-Control-Allow-Origin: *.

IIS server redirect issue behind Cloudfront

I've got an ASP.NET website (let's say http://cdn.mysite.com) hosted on IIS and sitting behind an Amazon CloudFront distribution (using a CNAME to access the cdn.* URL above; let's say the distribution URL is http://mysite.cloudfront.net).
If a user hits a folder/directory URL without a trailing slash, the server will issue a redirect to the origin cdn URL: if a user navigates to http://mysite.cloudfront.net/thanks, they'll end up on http://cdn.mysite.com/thanks/ instead of http://mysite.cloudfront.net/thanks/.
Any suggestions on how to fix this in ASP.NET / IIS / CloudFront?
You're right, and rather than fighting it: have you configured CloudFront to whitelist the Host header?
For each behaviour > Forward Headers > Select 'Whitelist' > Select 'Host' from the list and hit Add.
This setting ensures that the Host header (mysite.cloudfront.net) is included in requests back to the origin (so make sure you've added mysite.cloudfront.net to your site bindings). I'd expect the redirect issued by IIS to use the correct domain name once this configuration is in place.

IIS 7 - redirect from HTTPS to HTTP schema not working

I recently set up an ASP.NET application under Windows 7 / IIS 7 and enabled SSL for this app.
The app works great under SSL, but when I change the scheme from HTTPS to HTTP using a Response.Redirect, the request times out. I am stuck with it; any idea is welcome.
Regards
You cannot switch protocols unless you provide an absolute URL. The reference must be absolute.
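As a general illustration of that point (a Go sketch of the same idea; in ASP.NET you would pass the full http:// URL to Response.Redirect, and www.mysite.com here is just the example host from the configuration below):

package main

import "net/http"

func main() {
	http.HandleFunc("/done", func(w http.ResponseWriter, r *http.Request) {
		// An absolute URL is required to switch schemes; a relative
		// Location such as "/done" would keep the browser on https.
		http.Redirect(w, r, "http://www.mysite.com"+r.URL.RequestURI(), http.StatusFound)
	})
	http.ListenAndServe(":8080", nil) // TLS listener details omitted for brevity
}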
Make use of the encryptedUri and unencryptedUri attributes: unencryptedUri may be specified to send the user back to another domain or a specific URI when the module removes security.
You can have a custom configuration:
<secureWebPages
mode="RemoteOnly"
encryptedUri="secure.mysite.com"
unencryptedUri="www.mysite.com"
maintainPath="True"
warningBypassMode="AlwaysBypass">
...
</secureWebPages>
An example would be to redirect secure requests to secure.mysite.com, while requests that don't need to be secure are redirected back to www.mysite.com. maintainPath is used in conjunction with the above attributes: when the module redirects to the encryptedUri or unencryptedUri, it appends the current path before sending users on their way.

Caching with Varnish & Varying over custom-set HTTP headers

I'm developing your standard high-traffic e-commerce website and want to set up caching with Varnish. The particular thing about this setup is that the application will return different content depending on the user's location.
So my plans are these:
Set up nginx with the GeoIP module, so I can get an X-Country: XX header on all the requests going to the app backends.
Configure the Rails application to always return a "Vary: X-Country" response header.
Put the Varnish server between nginx and the app backends, so it can cache multiple versions of the objects served by Rails and serve them based on the request headers set by nginx (not by the client browser).
Does anyone have experience with a setup like this? Anything I should be aware of?
If GeoIP lookup is slow, and/or you want to enable people to override the country setting, you could use a country cookie and have the front-end Varnish check for it.
If there is no country cookie, forward the request to your nginx back-end for GeoIP lookup. Nginx serves a redirect with a Set-Cookie: country=us header. If you want to avoid redirects and support cookie-refusing clients/robots, nginx can forward the request to Rails and still try to set the country cookie in the response. Or Varnish can capture the redirect response and do a "restart" with the newly set cookie and go to the back-end.
If you already have a country cookie, use it in your Varnish hash.
If Rails can do the GeoIP resolving, you don't need nginx, except when you use it to serve files...
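Whichever variant you choose, the application side of it (step 2 in the plan) is just reading the country the front end determined and echoing a matching Vary header. A minimal sketch of that idea, in Go here for illustration rather than Rails (X-Country is the header name assumed in the question):

package main

import (
	"fmt"
	"net/http"
)

func main() {
	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		// nginx (GeoIP) or Varnish (country cookie) is assumed to have set this header.
		country := r.Header.Get("X-Country")
		if country == "" {
			country = "US" // hypothetical fallback when no lookup result is available
		}
		// Tell Varnish and any other cache that the body differs per country.
		w.Header().Set("Vary", "X-Country")
		fmt.Fprintf(w, "content for country %s\n", country)
	})
	http.ListenAndServe(":8080", nil)
}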
