Browser Integrity Check - NGINX - HTTP

I have set up an NGINX HTTP reverse proxy and want to add a JavaScript browser integrity check to it, like the following:
http://prntscr.com/a1rnve
http://prntscr.com/a1rnyf
Can someone point me in the right direction? I've been trying for hours and can't get it working.

What your screenshots show is CloudFlare's anti-DDoS protection; the website you're trying to access is behind CloudFlare, this isn't a stock NGINX feature. I would recommend using it yourself, read more here :)
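If you want to approximate that behaviour on your own proxy instead of putting CloudFlare in front of it, the usual trick is a cookie-based JavaScript challenge; the third-party testcookie-nginx-module implements a hardened version of this. Below is a minimal sketch in plain nginx config, purely illustrative: the cookie name, server_name, and backend address are assumptions, and any bot that executes JavaScript (or replays the cookie) will pass it.

    # Map: did the client present the challenge cookie?
    map $http_cookie $passed_js_check {
        default           0;
        "~jschallenge=1"  1;   # cookie set by the challenge page below
    }

    server {
        listen 80;
        server_name proxy.example.com;          # assumption

        location / {
            error_page 418 = @challenge;        # repurpose an unused code
            if ($passed_js_check = 0) {
                return 418;                     # no cookie yet -> challenge
            }
            proxy_pass http://127.0.0.1:8080;   # assumption: your backend
        }

        location @challenge {
            default_type text/html;
            return 200 '<html><body>Checking your browser...<script>document.cookie="jschallenge=1; path=/";location.reload();</script></body></html>';
        }
    }

The error_page / return 418 pair is just an internal jump to the named location. For anything facing real attacks, use testcookie-nginx-module or a service like CloudFlare rather than this sketch.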

Related

How to create an HTTPS->HTTPS subdirectory redirect using subdomains?

I am currently having issues with setting up an HTTPS domain redirect. I have a DNS URL redirect entry that points a few sub-domains to same-server URLs. For example:
docs.kipper-lang.org -> kipper-lang.org/docs/
play.kipper-lang.org -> kipper-lang.org/playground
The issue I am currently experiencing is that the subdomains mostly work, but only over HTTP. If I attempt to use HTTPS (for example, https://docs.kipper-lang.org), the redirect never happens and the request apparently gets stuck waiting for an HTTPS certificate (I think, but I don't know for sure, since it loads forever and then times out).
So my DNS provider does its job for the most part, but I am not sure how to add HTTPS encryption to these redirects. Is there some DNS configuration, or even a middle-man redirect service, where this HTTPS handling is built in? Receiving a "Warning: Insecure connection" every time someone uses the subdomains is a massive problem for me.
Note, though, that since I am hosting on GitHub Pages, I cannot perform these redirects on the server side myself, as I can't run any code there.
I would greatly appreciate any ideas for fixing this or what I could use to achieve this another way.
Thanks in advance!
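Worth noting: an HTTPS redirect can only be delivered after a successful TLS handshake, and a typical DNS provider's redirect host has no certificate for docs.kipper-lang.org; that would explain the endless load and eventual timeout. Any middle-man service has to terminate TLS for the subdomain itself (Cloudflare's free proxy is a common choice for exactly this). For illustration, on a self-managed redirect host the whole job is a few lines of nginx; the certificate paths here are hypothetical:

    server {
        listen 443 ssl;
        server_name docs.kipper-lang.org;
        ssl_certificate     /etc/ssl/docs.kipper-lang.org.pem;  # assumption
        ssl_certificate_key /etc/ssl/docs.kipper-lang.org.key;  # assumption
        return 301 https://kipper-lang.org/docs/;
    }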

Change the URL the client sees with HAProxy

I am load balancing to a server and I don't want the URL to change; the client has to see the same URL they typed.
For example, if the client enters test.domain.com, HAProxy balances between the backends, but the client ends up seeing:
https://website.com/blabla/blablalogin.htmx
And I want the client to see only:
test.domain.com, or even https://test.domain.com
Is it possible to rewrite a URL with HAProxy?
I have been searching and I can't figure out how to do it!
Thanks a lot!!
Assuming you have a working HAProxy setup, the problem may be in the website itself. Make sure the website uses relative URLs instead of absolute ones, e.g. a login link of /blabla/blablalogin.htmx instead of https://website.com/blabla/blablalogin.htmx.
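If you can't change the site, HAProxy can also paper over part of this at the edge: terminate TLS for test.domain.com, present the backend the hostname it expects, and rewrite any absolute Location redirects it sends back. A rough sketch; the certificate path and backend address are assumptions:

    frontend fe_main
        bind :443 ssl crt /etc/haproxy/certs/test.domain.com.pem  # assumption
        default_backend be_website

    backend be_website
        # Send the backend the Host header it expects
        http-request set-header Host website.com
        # Rewrite absolute redirects back to the client-facing name
        http-response replace-header Location https://website\.com(.*) https://test.domain.com\1
        server web1 website.com:443 ssl verify none               # assumption

Note this only fixes the Location response header; absolute URLs embedded in the HTML itself still need the relative-URL fix described above.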

How to change Server Name in HTTP Response Header?

Is it possible to change the server name that nginx sends in its HTTP response headers to something else? I want to do it to confuse prying eyes and enhance security.
You would need to go into the core source code, find where the name is set, change it, and recompile Nginx.
Not really worth the trouble.
There is the server_tokens directive, which will at least hide the version number: http://wiki.nginx.org/HttpCoreModule#server_tokens.
Not much use in terms of security either, but far less trouble to achieve.
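If recompiling is off the table, the third-party headers-more module (headers-more-nginx-module) can overwrite the header outright; stock nginx only lets you trim it. A sketch, assuming the module is compiled in or loaded dynamically:

    # server_tokens only hides the version: "Server: nginx" instead of
    # "Server: nginx/1.25.3". Replacing the name entirely needs headers-more.
    server_tokens off;
    more_set_headers "Server: webserver";   # requires headers-more module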

How do you disallow crawling on origin server and yet have the robots.txt propagate properly?

I've come across a rather unique issue. If you deal with scaling large sites and work with a company like Akamai, you have origin servers that Akamai talks to; whatever you serve to Akamai, they propagate on their CDN.
But how do you handle robots.txt? You don't want Google crawling your origin directly; that can be a HUGE security issue, think denial-of-service attacks.
But if you serve a robots.txt from your origin with "Disallow", then your entire site becomes uncrawlable!
The only solution I can think of is to serve a different robots.txt to Akamai than to the world: disallow for the world, allow for Akamai. But this is very hacky and prone to so many issues that I cringe thinking about it.
(Of course, origin servers shouldn't be viewable to the public, but I'd venture to say most are for practical reasons...)
It seems like an issue the protocol should handle better. Or perhaps search engines could allow a site-specific, hidden robots.txt in their webmaster tools...
Thoughts?
If you really want your origins not to be public, use a firewall or access control to restrict access to any host other than Akamai. That's the best way to avoid mistakes, and it's the only way to stop the bots and attackers who simply scan public IP ranges looking for web servers.
That said, if all you want is to keep out non-malicious spiders, consider using a redirect on your origin server which sends any request whose Host header doesn't specify your public hostname to the official name. You generally want something like that anyway, to avoid confusion or search-rank dilution when variations of the canonical hostname exist. With Apache this could use mod_rewrite, or even a simple virtual-host setup where the default server has RedirectPermanent / http://canonicalname.example.com/ (a sketch follows below).
If you do use this approach, you could either simply add the production name to your test systems' hosts file when necessary, or also create and whitelist an internal-only hostname (e.g. cdn-bypass.mycorp.com) so you can reach the origin directly when you need to.
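As a rough illustration of that default-vhost redirect in Apache (the hostnames are the placeholders from above, not real infrastructure):

    # Default (first) vhost: any request arriving under the wrong name
    # gets bounced to the canonical, Akamai-served hostname.
    <VirtualHost *:80>
        ServerName origin-default.invalid
        RedirectPermanent / http://canonicalname.example.com/
    </VirtualHost>

    # Internal-only name for hitting the origin directly (access-controlled).
    <VirtualHost *:80>
        ServerName cdn-bypass.mycorp.com
        DocumentRoot /var/www/site                  # assumption
    </VirtualHost>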

Is there a way to ensure that an ASP.NET application is (only) running on the HTTPS protocol?

I'm wondering if there is a way to ensure that an ASP.NET application can only be run over the HTTPS protocol.
I'm fine with any code (a defensive-programming measure, perhaps?) that can do the trick, or possibly some IIS/web-server setting that can get the job done.
IIS will definitely allow you to require HTTPS. The instructions are here.
Edit: I had to go dig for it, but there's also Request.IsSecureConnection for defensive programming.
The only problem with enforcing SSL at the IIS level is that the user receives an ugly 403.4 error page:
"The page must be viewed over a secure channel"
To make the transition seamless, you can check Request.IsSecureConnection and redirect the user to the secure site whenever a request doesn't come in over SSL.
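A minimal sketch of that redirect in Global.asax (classic ASP.NET; the check is the standard pattern, the exact URL rebuilding is illustrative):

    // Global.asax.cs - send any plain-HTTP request to its HTTPS equivalent.
    protected void Application_BeginRequest(object sender, EventArgs e)
    {
        if (!Request.IsSecureConnection)
        {
            // Rebuild the URL on the https scheme, keeping path and query.
            string secureUrl = "https://" + Request.Url.Host + Request.RawUrl;
            Response.Redirect(secureUrl, endResponse: true);
        }
    }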
There is a nice article with some good information and a helper utility class on this subject over at leastprivilege.com.
