HTTP Post Header Data can be seen in web browser - asp.net

In Firefox, using the Firebug add-on, the data you entered can be seen in the POST headers. I wonder if this is a security flaw in the ASP.NET web application.

It's not. If a browser doesn't expose the headers, other tools will. They're public by nature, except to the degree that HTTPS encryption keeps third parties from reading them.
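To see why, here is a minimal Python sketch (the URL and form fields are made up) showing that the POST body and headers are assembled on the client itself, so any client-side tool can print them before a single byte leaves the machine:

    import requests

    # Hypothetical login URL and form fields, purely for illustration.
    req = requests.Request(
        "POST",
        "https://example.com/login.aspx",
        data={"username": "alice", "password": "s3cret"},
    )
    prepared = req.prepare()

    # Everything a tool like Firebug shows is assembled on the client,
    # before anything is sent over the network:
    print(prepared.method, prepared.url)
    print(dict(prepared.headers))  # e.g. Content-Type: application/x-www-form-urlencoded
    print(prepared.body)           # username=alice&password=s3cret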

Related

Which web server should HTTP security headers be used on when a web application is a consumer of a web service?

There is a web application and a web service, hosted on separate web servers. The web application is a consumer of the web service.
The question is: on which web server should HTTP security headers (e.g. Strict-Transport-Security, X-XSS-Protection, X-Frame-Options, X-Content-Type-Options, etc.) be set?
On the web service's server, on the consuming web application's server, or on both? Which is reasonable?
If you understand what each of them does, you will be able to tell which is needed where.
Strict-Transport-Security is about ensuring that clients only use HTTPS (and not plain HTTP) to access content. This of course needs a compliant client. All browsers are compliant, and hopefully some other clients also honor this header. Even if they don't, you should just send it from both services and web apps.
X-XSS-Protection is about explicitly enabling some cross-site-scripting protection in browsers (in practice this means not running JavaScript on a page when the request contained that same JavaScript, to prevent some reflected XSS). It is still best practice to send this for web apps, though it is now the default in browsers, and it does not really prevent more advanced XSS at all (the app itself needs to be correctly implemented). For backend services that only serve JSON and not HTML, it is irrelevant, and it is also irrelevant if the client is not a browser (but, for example, a web app). You can still send it from services too; it won't do any harm.
X-Frame-Options mitigates clickjacking, among some other more niche attacks. It basically prevents the browser from opening the page in a frame. If the client is not a browser, this doesn't make a lot of sense; however, it might have implications for data leaks if used together with CORS headers. So again, you can just send this from services too, though it is not strictly necessary in the base case.
X-Content-Type-Options: nosniff mitigates an attack that used to be performed against older versions of Internet Explorer (and maybe some other browsers as well), where the browser incorrectly guessed file types and handled the content accordingly, especially during file downloads. I think this is no longer feasible with modern browsers, but the best practice is still to send it. It has no effect for non-browser clients, but does no harm either.
So in short, most of these are only relevant for web applications, and not for backend services that do not serve HTML. However, you can and probably should still send them from services too; they will just do nothing in most cases, and might help when, for example, an attacker somehow makes a user open a service response in a browser.
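Whichever server you end up setting them on, emitting the headers is cheap. Below is a minimal Python WSGI sketch, just to illustrate the mechanics; the middleware name and the header values are my own choices (common defaults), not something prescribed by the question:

    # A minimal WSGI middleware that adds the headers discussed above to every
    # response. The values are common defaults, not requirements.
    SECURITY_HEADERS = [
        ("Strict-Transport-Security", "max-age=31536000; includeSubDomains"),
        ("X-XSS-Protection", "1; mode=block"),
        ("X-Frame-Options", "DENY"),
        ("X-Content-Type-Options", "nosniff"),
    ]

    def add_security_headers(app):
        def wrapped(environ, start_response):
            def start_response_with_headers(status, headers, exc_info=None):
                return start_response(status, list(headers) + SECURITY_HEADERS, exc_info)
            return app(environ, start_response_with_headers)
        return wrapped

    if __name__ == "__main__":
        from wsgiref.simple_server import make_server

        def demo_app(environ, start_response):
            start_response("200 OK", [("Content-Type", "text/html")])
            return [b"<h1>hello</h1>"]

        make_server("127.0.0.1", 8000, add_security_headers(demo_app)).serve_forever()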

How to automate logging in and retrieving data?

I want to automate logging into a website and retrieving certain data.
I thought the way to do this would be to sniff the HTTP requests so that I know where the login form is being POSTed to, and then do the same from Node.js/Java/Python.
However, I can't seem to find the HTTP request that handles it.
The site seems to use a Java applet and a lot of JavaScript.
This is the site: link
Should I have a different approach?
I also wonder about storing a session cookie and sending it with each HTTP request after logging in.
I'm sorry if I am not too clear; I will try to explain myself further and edit this post if needed.
You can use the developer console (hit F12) in Chrome (this also works in other browsers) and then click the "Network" tab. There you can see all network calls.
To detect what HTTP requests are performed from a mobile device, you can use a proxy like Charles Proxy.
Also be aware that if you POST from Node.js, the cookies won't be set in the user's browser.
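If the login turns out to be a plain form POST, a sketch along these lines handles the cookie part; the endpoints and field names below are hypothetical, so copy the real ones from the request you see in the Network tab:

    import requests

    # Hypothetical endpoints and field names; take the real ones from the
    # request captured in the Network tab after logging in once by hand.
    LOGIN_URL = "https://example.com/login"
    DATA_URL = "https://example.com/account/data"

    session = requests.Session()  # keeps cookies across requests
    resp = session.post(LOGIN_URL, data={"username": "alice", "password": "s3cret"})
    resp.raise_for_status()

    # The session re-sends whatever cookies the login response set.
    data = session.get(DATA_URL)
    print(data.status_code)
    print(data.text[:200])

If the login flow is driven by the Java applet or by JavaScript rather than a plain form POST, a bare HTTP client like this won't be enough, and a headless browser is probably the easier route.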

Is it safe to use protocol relative URL in email?

There is existing discussion [1] on the use of protocol-relative URLs in HTML, but what about email?
Will email clients, or service providers like Gmail, strip or modify protocol-relative URLs when they are used in HTML email?
[1] Can I change all my http:// links to just //?
I sent an email through Gmail with this content:
link
and it was received unmodified. When I right-clicked on the link to copy the link address, Chrome prepended https: to it (since Gmail uses secure HTTP), but when I inspected the element's HTML, it showed the <a> tag as I had written it.
It's not normal for email servers to change the contents of emails.
Omitting the protocol is intended to let a web browser choose between secure and insecure versions of the same content. If you load a page via HTTPS and it contains an image with an src beginning in http, the browser warns the user that it is dangerous to load insecure content, which is a confusing and worrying message. If you load a page via HTTP and it contains an image with an src beginning in https, that prevents caching, among other inefficiencies.
The compromise is to allow the browser to load content with security matching the page that loads it: efficiency for an insecure page, and a complete guarantee of security for a secure page.
But an email client always warns about embedded content (images, scripts, ...), meaning omitting the protocol has no benefit.
Furthermore, a non-browser email client doesn't have a protocol to begin with. It downloads information and then loads it from the disk. If you really want to let the email client choose to load embedded content with the security level with which it loaded the email, you'd let the client look for the information on the same computer. (They'll actually do that by assuming // means file:///.)
So is it safe to put a // URI in an email? I'd say it doesn't make sense; as a result, no standard way for non-browser clients to handle it has emerged, which means you're looking at undefined behavior.
Better to choose the protocol based on the sensitivity of the information identified by the URI. Is it a chart of proprietary financial data? Use https. Is it a lolcat? Use http.
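If you generate the email HTML yourself, one way to act on that advice is to pin the protocol explicitly before sending. Here is a rough Python sketch; the function and regex are mine, and a real HTML parser would be more robust than a regex:

    import re

    # Rough sketch: rewrite protocol-relative src/href values to an explicit
    # scheme before the HTML goes into an email.
    def pin_protocol(html: str, scheme: str = "https") -> str:
        return re.sub(r'(\b(?:src|href)\s*=\s*["\'])//', rf"\1{scheme}://", html)

    html = '<a href="//example.com/report">chart</a> <img src="//example.com/cat.gif">'
    print(pin_protocol(html))
    # <a href="https://example.com/report">chart</a> <img src="https://example.com/cat.gif">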
No, it's not safe to use protocol-relative URLs in email, because they leave the protocol up to the client, so the resource is fetched over whatever protocol the containing page is telling it to use.
Some email clients (Outlook especially, as usual) won't try to use HTTP or HTTPS as the protocol. Instead they'll use the file:// protocol and assume the resource you're referring to is on the local machine. But it won't be. So don't use these in emails.
You also have to be sure that the server you're requesting from is capable of serving content over both HTTP and HTTPS. If not, you might end up fetching content from an unsecured or nonexistent server port.
IE6 does not know how to handle this. If you care about supporting Internet Explorer 6, then you shouldn't use these.
IE7-8 support protocol-relative URLs, but they'll end up fetching the resource twice, once over HTTP and once over HTTPS. This can slow things down a bit, but the way I see it, that's not much of a problem for anyone except the person using IE7-8, and if you're using IE you've got more important things to worry about.
It's also browser dependent, so it depends on which browser is used: Gmail works fine in Chrome but not in IE6.

Which browsers do not send referer information?

This is not dependent on the browser make/version, but on the browser configuration. All decent browsers with default settings will send it, but the end user can configure them not to send it. It also depends on other software in the environment: if you have, for example, Norton AntiVirus/Internet Security installed, you can configure it to block the referrer header or spoof it with something entirely different, regardless of the browser used.
All the popular web browsers send referrer headers, at least by default. Some web browsers give their users the option to turn them off. (Example)
Referrer information is also not sent with a Flash HTTP request:
http://training.sessions.edu/resources/SoftwareDesignTips/current/flash.asp
For example, if someone clicks on a Flash banner linked to your site, the request can come to your server without HTTP referrer information.
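The practical takeaway for server-side code is to treat the header as optional and never base a security decision on it alone; a small, purely illustrative Python check:

    # Purely illustrative: treat the Referer header as optional and never base
    # a security decision on it alone.
    ALLOWED_REFERRERS = ("https://example.com/",)  # hypothetical

    def referrer_looks_ok(headers: dict) -> bool:
        referer = headers.get("Referer")  # note the header's historical misspelling
        if referer is None:
            return True  # browsers, proxies, and Flash requests may simply omit it
        return referer.startswith(ALLOWED_REFERRERS)

    print(referrer_looks_ok({}))                                    # True (absent)
    print(referrer_looks_ok({"Referer": "https://example.com/a"}))  # True
    print(referrer_looks_ok({"Referer": "https://evil.test/"}))     # False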

Why are SOAP and GET disabled in asmx webservices by default?

I'm about to turn on the missing protocols for my asmx web services. They're already behind two layers of authentication and have a role-checking attribute, so they are otherwise secure.
This MS KB article explains that GET and SOAP are disabled for asmx by default, while POST is enabled by default, but it doesn't say why other than "security reasons." Is this just superstition? Why did they do that? It seems that having POST enabled is just as insecure as having GET enabled.
I suppose this reduces the attack surface, but disabling everything until someone needs to invoke the web service over a particular protocol would be even more secure than leaving POST enabled.
The actual link is INFO: HTTP GET and HTTP POST Are Disabled by Default.
The GET and POST protocols cannot support SOAP Headers. These are required by many services for security purposes.
Additionally, these protocols are not used that often for pure SOAP services (as the protocol specifies the use of POST). Having them enabled leaves a door open that nobody will be watching. Bad people may sneak in.
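For contrast, here is a hedged Python sketch of what the two invocation styles look like on the wire (the endpoint, method, and namespace are invented); only the SOAP envelope has a place to carry the headers mentioned above:

    import requests

    # Invented .asmx endpoint, method, and namespace, just to show the shape of
    # the two bindings. Only the SOAP envelope has a place for SOAP headers.
    URL = "https://example.com/Service.asmx"

    # HTTP GET binding (disabled by default): parameters go in the querystring,
    # and there is nowhere to put a SOAP header.
    requests.get(URL + "/GetQuote", params={"symbol": "MSFT"})

    # SOAP binding (enabled by default): an XML envelope sent via POST.
    envelope = """<?xml version="1.0" encoding="utf-8"?>
    <soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
      <soap:Header>
        <!-- security tokens and similar metadata travel here -->
      </soap:Header>
      <soap:Body>
        <GetQuote xmlns="http://example.com/"><symbol>MSFT</symbol></GetQuote>
      </soap:Body>
    </soap:Envelope>"""

    requests.post(
        URL,
        data=envelope.encode("utf-8"),
        headers={
            "Content-Type": "text/xml; charset=utf-8",
            "SOAPAction": "http://example.com/GetQuote",
        },
    )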
