What specific data being sent via HTTP POST could result in an HTTP 504 Error?

I've got a site that uses an order entry form and sends a rather decently sized POST request when the form is submitted.
However, when a particular value is passed in one of our form variables (OrderDetail), every time without fail, it gets an error page in the browser and a 504 error via Fiddler.
Here are a couple examples of tests I ran last night sending POST requests through Fiddler. When the "OrderDetail=" value is changed to the below it will either submit successfully or return a 504 error after a few seconds:
These ones FAIL:
&OrderDetail=Deliver+Writ+of+Execution%3B+and+Application+for+Earnings+Withholding+Order+to+Los+Angeles+County+Sheriff+DASH+Court+Services+Division+per+instructions
&OrderDetail=Deliver+Execution+Earnings+Withholding+Order+to+Los+Angeles+County+Sheriff+DASH+Court+Services+Division+per+instructions
&OrderDetail=Deliver+Writ+of+Execution%3B+and+Application+for+Earnings+Withholding+Order+to+Los+Angeles+County+Sheriff
&OrderDetail=Deliver+Writ+of+Execution%3B+Application+for+Earnings+Withholding+Order+to+Los+Angeles+County+Sheriff
&OrderDetail=Writ+of+Withholding+Execution+Order+Los+Angeles+County+Sheriff
&OrderDetail=writ+Execution+adsfsdfsdfsd+Order+County
&OrderDetail=wd+Execution+adsfsdfsdfsd+Order+Count
This got me thinking that perhaps it has to do with the words "Exec" ('Exec' and 'Execution' throw errors, 'Exe' does not) and "Count" ('County' and 'Count' throw errors, 'Cont' does not).
However, I haven't seen anything this specific mentioned in google searches regarding the 504 error.
Regarding the ColdFusion code around this, there is nothing fancy for this page. Just a standard form post. I added a cfmail test in the Application file, and on these failures it is never run, so this seems to be happening between the browser and IIS. We're on a shared server, so I can't see too much there, though.
Oddly enough, when the &OrderDetail= param is changed to one of these values (very similar to the above), the result is success:
&OrderDetail=wd+Execution+adsfsdfsdfsd+Order+Coun
&OrderDetail=wd+Execution+adsfsdfsdfsd+Order+Conty
&OrderDetail=Writ+of+Withholding+Order+Execution+Los+Angeles+County+Sheriff
&OrderDetail=Writ+of+Withholding+ExecutionOrder+Los+Angeles+County+Sheriff
In the 3rd one, I put 'Order' BEFORE 'Execution' and it works.
The total length of this POST request is about 4720 characters. I've increased the length of this one field to 5-6 times its length and those requests passed, so it almost seems tied to the value of the "&OrderDetail" param in the POST.
Any ideas on why this specific data could be an issue for a web server? I've never seen this before and it doesn't continue to be a problem for nearly any other request going through.
One interesting note as well: in the POST request, this variable is pretty close to the start of the param list. If I delete everything after it, the request goes through with no problem, although I haven't been able to nail down what in the subsequent lines could be causing it. I can post the entire request if it will help.
More importantly though, I just want to know what could qualify as "reserved" or "illegal" for FORM data. Everything appears to be escaped properly, so I'm not sure what else can be done here except for some pre-processing JavaScript to further escape any such words.
Thanks!

Given that EXEC and COUNT are causing the error, whilst putting ORDER before EXEC is preventing the error, this sounds like something is making a flawed attempt at protecting from SQL injection attacks.
If you have any software in place that claims to do that, I would see if (temporarily) disabling it stops the problem from occurring.
(This software might be at the firewall level, so you may need to talk to your sys admins.)
Importantly, I would also check your codebase for where OrderDetail is used, and make sure that it is using cfqueryparam whenever it is used inside a query - and the same goes for all other user-supplied data.
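If you want to test that theory before involving the host, you can replay the POST outside the browser and compare status codes across a handful of OrderDetail variants. Below is a minimal Python sketch of that kind of test; the endpoint URL and the single form field are placeholders standing in for your real order-entry request, so adapt them to the full payload you already have in Fiddler:

# Sketch only: replays the suspect POST with different OrderDetail values and
# prints the status code for each, to confirm which keyword combinations 504.
import requests

ENDPOINT = "https://www.example.com/orderentry/submit.cfm"  # placeholder URL

variants = [
    "Writ of Withholding Execution Order Los Angeles County Sheriff",  # failed in Fiddler
    "Writ of Withholding Order Execution Los Angeles County Sheriff",  # passed in Fiddler
    "wd Execution adsfsdfsdfsd Order Count",                           # failed
    "wd Execution adsfsdfsdfsd Order Coun",                            # passed
]

for detail in variants:
    resp = requests.post(ENDPOINT, data={"OrderDetail": detail}, timeout=30)
    print(resp.status_code, repr(detail))

# A consistent 504 whenever EXEC/COUNT appear before ORDER (and success otherwise)
# points at a filter sitting in front of the application, not at the ColdFusion page itself.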

Related

How is the GET method idempotent?

How is the GET method idempotent while POST is not? We use POST in form submission; if we submit the form twice, it re-submits the form data. And why don't we use GET for placing orders or purchasing products, for instance, if it is idempotent?
An idempotent HTTP method is an HTTP method that can be called many times without different outcomes. It would not matter if the method is called only once, or ten times over. The result should be the same. Again, this only applies to the result, not the resource itself.
a=10; //This is idempotent: no matter how many times we execute this statement, a will always be 10.
a++; //This is not idempotent. Executing this 10 times will result in a different outcome than running it 5 times.
Now, coming to your query.
If we used the GET method for placing orders or purchasing products, the request would be treated as safely repeatable, so an order could be placed again regardless of the outcome (for example, even after the product has gone out of stock). In contrast, if you use the POST method, each new purchase request is a distinct action with its own result.
The example below is not idempotent, because the outcome can be different for every new request:
https://accounts.google.com/Login#identifier
The GET method should be used to send information from the browser to the server in the URL. Below is an example of GET usage:
http://www.google.co.in/search?q=cristiano+ronaldo
Below is the answer to your query in the comments:
When users revisit a page that resulted from a form submission, they might be presented with the page from their history stack (which they had probably intended), or they might be told that the page has now expired. Typical user response to the latter is to hit Reload.
This is harmless if the request is idempotent, which the form author signals to the browser by specifying the GET method.
Browsers typically will (indeed "should") caution their users if they are about to resubmit a POST request, in the belief that this is going to cause a further "permanent change in the state of the universe", e.g. ordering another Mercedes-Benz against their credit card or whatever. If users get so accustomed to this happening when they try to reload a harmless idempotent request, then sooner or later it's going to bite them when they casually [OK] the request.
Now, when implementing the GET and POST methods, a developer should consider the security implications and put the code behind the appropriate method. Almost anything can be implemented with either method if you work within the limits of GET (URL size, etc.), but that is not good practice.
GET -> for information retrieval (if you want to read data without changing state).
POST -> for information creation/update/deletion.
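To make that concrete, here is a small sketch of the two styles of handler (Flask is used purely for illustration; neither the framework nor the route names come from the question):

# Repeating the GET changes nothing; every repeated POST creates another order.
from flask import Flask, jsonify, request

app = Flask(__name__)
orders = []  # in-memory stand-in for a real data store

@app.get("/orders")
def list_orders():
    # Safe and idempotent: calling this once or a hundred times leaves `orders` unchanged.
    return jsonify(orders)

@app.post("/orders")
def create_order():
    # Not idempotent: each call adds a new order, which is why browsers warn before re-submitting.
    order = {"id": len(orders) + 1, "item": request.form.get("item")}
    orders.append(order)
    return jsonify(order), 201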

Sys.WebForms.PageRequestManagerServerErrorException 12031. Out of ideas

In a recent project we are currently getting 12031 errors. Here is the complete error:
Sys.WebForms.PageRequestManagerServerErrorException: the status code returned from the server was 12031
The problem is, this doesn't happen all the time and we are unable to reproduce the error on development environment.
We use AJAX in our application and this exception happens on every page once in a while.
I've found a post on SO with the same problem and tried changing maxRequestLength to "1" to see if I constantly get the same error but I don't. Instead, I'm getting
Maximum request length exceeded.
So I'm starting to think that it is not related to maxRequestLength. I'm actually out of ideas. I have a ScriptManager in my MasterPage and its AsyncPostBackTimeout="240". That is roughly the same amount of time: I get the 12031 error after about 3.5 minutes of "nothing". I'm logging one of the pages, and by logging I mean logging every section of the page, like "Page_Load is called", "xyz is called", etc.; I have around 15 such spots on the page. After the user clicks a button and the ScriptManager tries to do its job, no postback occurs and no logging happens. It is as if the page wants to do a postback but is too old to do it. It tries this for around 3.5 minutes and fails with the given error.
Please, if you have any ideas, HELP ME OUT.
Thank you
That error almost certainly has nothing to do with the size of the response, AsyncPostBackTimeout, or maxRequestLength.
Connection resets are usually indicative of poor network connectivity or a server loaded down to its capacity limits. A few things you could try:
Inspect the Windows Event Log during the time(s) that the kiosks were known to have received the error. Look for any relevant errors or warnings.
If feasible, ask the kiosk staff to use something like Pingtest to test the quality of their local network connection at the time that they receive the error in your app.
Use a service like Pingdom to ensure that the server itself isn't intermittently losing connectivity.
This error may be due to the HTTP Runtime limitation of the maxRequestLength. The default value is 4096.
Try adding (or editing) the following entry in your Web.Config:
"<httpRuntime maxRequestLength="8192" />" (effectively allowing 8mb of data transmission, instead of the default 4mb).
Please not....You can set data as per you max request. 8192 is not the limit. Also you need to add Page.Form.Attributes.Add("enctype", "multipart/form-data"); in Page_Load event of the page.
You'll want to enter this in the System.Web configuration section.
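For reference, here is a minimal sketch of how that entry sits inside the system.web section of Web.Config (the value is illustrative; maxRequestLength is measured in KB, so 8192 means roughly 8 MB):

<configuration>
  <system.web>
    <!-- maxRequestLength is in kilobytes: 8192 KB = 8 MB -->
    <httpRuntime maxRequestLength="8192" />
  </system.web>
</configuration>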
How do I avoid getting a PageRequestManagerParserErrorException?
To start with, don't do anything from the preceding list! Here's a matching list of how to avoid a given error (when possible):
Calls to Response.Write():
Place an <asp:Label> or similar control on your page and set its Text property. The added benefit is that your pages will be valid HTML. When using Response.Write() you typically end up with pages that contain invalid markup.
Response filters:
The fix might just be to not use the filter. They're not used very often anyway. If possible, filter things at the control level and not at the response level.
HttpModules: Same as response filters.
Server trace is enabled:
Use some other form of tracing, such as writing to a log file, the Windows event log, or a custom mechanism.
Calls to Server.Transfer():
I'm not really sure why people use Server.Transfer() at all. Perhaps it's a legacy thing from Classic ASP. I'd suggest using Response.Redirect() with query string parameters or cross-page posting.
Another way to avoid the parse error is to do a regular postback instead of an asynchronous postback. For example, if you have a button that absolutely must do a Server.Transfer(), make it do regular postbacks. There are a number of ways of doing this:
The easiest is to simply place the button outside of any UpdatePanels. Unfortunately the layout of your page might not allow for this.
Add a PostBackTrigger to your UpdatePanel that points at the button. This works great if the button is declared statically through markup on the page.
Call ScriptManager.RegisterPostBackControl() and pass in the button in question. This is the best solution for controls that are added dynamically, such as those inside a repeating template.
Good luck!
I have received this error before when we had a Barracuda Device sitting in front of our website. It was a maximum request length issue because Barracuda protects against overloading the request size. We removed the device temporarily and it solved the problem. Not sure if this is your problem.

Is the ASP.NET cryptographic vulnerability work around a BIG LIE?

This question is somewhat of a follow up to How serious is this new ASP.NET security vulnerability and how can I workaround it? So if my question seems to be broken read over this question and its accepted solution first and then take that into the context of my question.
Can someone explain why returning the same error page and same status code for custom errors matters? I find this to be immaterial especially if this is advocated as part of the work around to it.
Isn't it just as easy for the script/application to execute this attack and not specifically care whether or not it gets a particular HTTP status code, focusing instead on the outcome? I.e., doing this 4000 times you get redirected to an error page, whereas on attempt 4001 you stay on the same page because it didn't invalidate the padding?
I see why adding the delay to the error page is somewhat relevant but doesn't this also just add another layer to fool the script into thinking the site is an invalid target?
What could be done to prevent this if the script takes into account that, since the site is ASP.NET and therefore running AES encryption, it should ignore the timing of error pages and watch the redirection (or lack of redirection) as the response vector? If a script does this, does that mean there's NO WAY to stop it?
Edit: I accept the timing attack reduction but the error page part is what really seems bogus. This attack vector puts their data into viewstate. There's only 2 cases. Pass. Fail.
Either Fail, they're on a page and the viewstate does not contain their data. No matter what you do here there is no way to remove the fail case because the page just will never contain their inserted data unless they successfully cracked the key. This is why I can't justify the custom errors usage having ANY EFFECT AT ALL.
Or Pass, they're on a page and the viewstate contains their inserted data.
Summary of this vulnerability
The cipher key from WebResource.axd / ScriptResource.axd is taken, and the first guess of the validation key is used to generate a potential key value with the ciphered text.
This value is passed to WebResource.axd / ScriptResource.axd; at this point, if the decryption key was guessed correctly, the request will be accepted, but since the decrypted data is garbage rather than the resource it is looking for, WebResource.axd / ScriptResource.axd will return a 404 error.
If the decryption key was not successfully guessed, it will get a 500 error for the invalid-padding exception. At this point the attack application knows to increment the potential decryption key value and try again, repeating until it finds the first successful 404 from WebResource.axd / ScriptResource.axd.
After having successfully deduced the decryption key this can be used to exploit the site to find the actual machine key.
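In code terms, the attack script is just a loop whose "oracle" is the difference between those two responses. The sketch below (Python, with a placeholder URL and a purely hypothetical candidates() generator) only illustrates the mechanism, and why collapsing every failure into one identical response starves the loop of information:

import requests

RESOURCE_URL = "https://www.example.com/WebResource.axd"  # placeholder target

def padding_oracle(ciphertext_guess):
    # The attack relies on telling a 404 ("decrypted fine, but the resource is garbage")
    # apart from a 500 ("padding invalid"). If both cases return the same page and
    # status code, this function can no longer learn anything.
    resp = requests.get(RESOURCE_URL, params={"d": ciphertext_guess}, timeout=10)
    return resp.status_code == 404

# The real tool feeds these True/False answers into the padding-oracle math:
# for guess in candidates():        # candidates() is hypothetical
#     if padding_oracle(guess):
#         ...  # key byte recovered, move on to the next one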
re:
How does this have relevance to whether they're redirected to a 200, 404 or 500? No one can answer this; this is the fundamental question. Which is why I call shenanigans on needing to do this tomfoolery with the custom errors returning a 200. It just needs to return the same 500 page for both errors.
I don't think that was clear from the original question, I'll address it:
Who said the errors need to return 200? That's wrong; you just need all the errors to return the same code, so making all errors return 500 would work as well. The config proposed as a workaround just happened to use 200.
If you don't do the workaround (even if it's your own version that always returns 500), you will see 404 vs. 500 differences. That is particularly true for WebResource.axd and ScriptResource.axd, since invalid decrypted data there means a missing resource / 404.
Just because you don't know which feature had the issue doesn't mean there aren't features in ASP.NET that give different response codes in different scenarios relating to padding vs. invalid data. Personally, I can't be sure whether any other feature gives a different response code as well; I can just tell you those two do.
Can someone explain why returning the same error page and same status code for custom errors matters? I find this to be immaterial especially if this is advocated as part of the work around to it.
Sri already answered that very clearly in the question you linked to.
It's not about hiding that an error occurred; it's about making sure the attacker can't tell the difference between errors. Specifically, it's about making sure the attacker can't determine whether the request failed because it couldn't be decrypted / the padding was invalid, vs. because the decrypted data was garbage.
You could argue: well, but I can make sure it isn't garbage to the app. Sure, but you'd need to find a mechanism in the app that allows you to do that, and the way the attack works you always need at least a tiny bit of garbage in the message. Consider these:
ScriptResource and WebResource both throw, so the custom error hides it.
View state is not encrypted by default, so by default it's not involved in the attack vector. If you go through the trouble of turning encryption on, then you very likely also set it to sign/validate. When that's the case, a failure to decrypt and a failure to validate look the same, so the attacker again can't know.
The auth ticket is also signed, so it's like the view state scenario.
Session cookies aren't encrypted, so they're irrelevant.
I posted on my blog how the attack gets as far as being able to forge authentication cookies.
Isn't it just as easy for the script/application to execute this attack and not specifically care whether or not it gets a http status code and more on the outcome? Ie doing this 4000 times you get redirected to an error page where on 4001 you stay on the same page because it didn't invalidate the padding?
As mentioned above, you would need to find a mechanism that behaves that way, i.e. where decrypted garbage keeps you on the same page instead of throwing an exception and thus sending you to the same error page.
Either Fail, they're on a page and the viewstate does not contain their data. No matter what you do here there is no way to remove the fail case because the page just will never contain their inserted data unless they successfully cracked the key. This is why I can't justify the custom errors usage having ANY EFFECT AT ALL.
Or Pass, they're on a page and the viewstate contains their inserted data.
Read what I mentioned about the view state above. Also note that the ability to re-encrypt more accurately is gained after they gain the ability to decrypt. That said, as mentioned above, by default view state is not encrypted, and when encryption is on it's usually accompanied by signing/validation.
I am going to elaborate on my answer in the thread you referenced.
To pull off the attack, the application must respond in three distinct ways. Those three distinct ways can be anything - status codes, different html content, different response times, redirects, or whatever creative way you can think of.
I'll repeat again - the attacker should be able to identify three distinct responses without making any mistake, otherwise the attack won't work.
Now coming to the proposed solution. It works, because it reduces the three outcomes to just two. How does it do that? The catch-all error page makes the status code/html/redirect all look identical. The random delay makes it impossible to distinguish between one or the other solely on the basis of time.
So, it's not a lie; it does work as advertised.
EDIT: You are mixing things up with a brute-force attack. There is always going to be a pass/fail response from the server, and you are right that it can't be prevented. But for an attacker to use that information to his advantage will take decades and billions of requests to your server.
The attack that is being discussed allows the attacker to reduce those billions of requests into a few thousands. This is possible because of the 3 distinct response states. The workaround being proposed reduces this back to a brute-force attack, which is unlikely to succeed.
The workaround works because:
You do not give any indication of "how far" the slightly adjusted request took you. If you get a different error message, that is information you can learn from.
With the delay you hide how long the actual calculation took, so you do not get timing information that shows whether you got deeper into the system and that you could learn from.
No, it isn't a big lie. See this answer in the question you referenced for a good explanation.

Am I using the cache correctly?

I have a page where I am pulling a dataset from the database, a few thousand records. I get it when the page is loaded and store it in the cache. Each time an operation is performed on the page, I check the cache to see if its still there, and if not, go get it again (20 minute expiration); fairly typical setup.
When I run the page, the initial data loads fine, and a default RowFilter is applied to the data. When I change the value of a dropdown (which changes the RowFilter), the page hangs for a moment, then returns a javascript error:
Line: 80772370 (yes, that's line 80 million...)
Char: 17
Error: Syntax error
Code: 0
URL: -the url of the page I'm on-
This error is repeated EXACTLY 20 times.
When I re-run the page and the operation that renders that error, I get a different line number (for example, the next time I ran it after I posted the above message, the line is at 80718666), exactly 20 times again.
Now a few curveballs:
I was having the exact same issues when I was using the Session to store the data rather than the cache.
I do not have this problem in the development environment (this is happening in QA). The web.config files for each environment are nearly identical, but the primary difference between them is that QA uses a sessionState server on a separate machine. This is why I moved from Session variables to the cache in the first place.
When the search criteria is intended to return no results at all, it performs as it should (shows no results).
Now this hasn't exactly been my best week, so maybe I'm missing something big, but I could use some guidance.
Thanks SO community.
If you use an UpdatePanel, remove it to see what the real error is, because right now the error is hidden in a JavaScript return string at the position you mention.
After you find your error, bring the UpdatePanel back.
My guess is that the error is a null object/control that has been cached, and you forgot to check whether it is null.
When you cache parts of your page and controls, you need to check in your code-behind whether they are null before using them.
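The pattern being described is language-agnostic; a rough sketch of the same cache-aside check (the Python syntax and the names are just for illustration, not the poster's code) looks like this:

# Always re-check for None after reading from the cache: the entry can expire
# or be evicted between any two requests, exactly as with a 20-minute window.
import time

_cache = {}  # key -> (data, expiry timestamp)

def get_dataset(load_from_db, key="report_data", ttl_seconds=20 * 60):
    entry = _cache.get(key)
    if entry is not None and entry[1] > time.time():
        return entry[0]                      # cached and not expired
    data = load_from_db()                    # cache miss or expired: reload
    _cache[key] = (data, time.time() + ttl_seconds)
    return data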

GET vs. POST does it really really matter?

Ok, I know the difference in purpose. GET is to get some data. Make a request and get data back. POST should be used for CRUD operations other than read I believe. But when it comes down to it, does the server really care if it's receiving a GET vs. POST in the end?
According to the HTTP RFC, GET should not have any side-effects, while POST may have side-effects.
The most basic example of this is that GET is not appropriate for anything like a purchase-transaction or posting an article to a blog, while POST is appropriate for actions-that-have-consequences.
By the RFC, you can hold a user responsible for actions done by POST (such as a purchase), but not for GET actions. 'Bots always use GET for this reason.
From the RFC 2616, 9.1.1:
9.1.1 Safe Methods
Implementors should be aware that the software represents the user in their interactions over the Internet, and should be careful to allow the user to be aware of any actions they might take which may have an unexpected significance to themselves or others.
In particular, the convention has been established that the GET and HEAD methods SHOULD NOT have the significance of taking an action other than retrieval. These methods ought to be considered "safe". This allows user agents to represent other methods, such as POST, PUT and DELETE, in a special way, so that the user is made aware of the fact that a possibly unsafe action is being requested.
Naturally, it is not possible to ensure that the server does not generate side-effects as a result of performing a GET request; in fact, some dynamic resources consider that a feature. The important distinction here is that the user did not request the side-effects, so therefore cannot be held accountable for them.
It does if a search engine is crawling the page, since they will be making GET requests but not POST. Say you have a link on your page:
http://www.example.com/items.aspx?id=5&mode=delete
Without some sort of authorization check performed before the delete, it's possible that Googlebot could come in and delete items from your page.
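One way to stay safe, sketched below with invented route names (Flask again, purely for illustration): make destructive actions reachable only through an authorized POST, since a crawler only follows links, i.e. only issues GETs:

from flask import Flask, abort, request

app = Flask(__name__)
items = {5: "sample item"}

@app.post("/items/<int:item_id>/delete")
def delete_item(item_id):
    # Stand-in authorization check; a crawler issuing GETs never reaches this handler at all.
    if request.headers.get("X-Auth-Token") != "expected-token":
        abort(403)
    items.pop(item_id, None)
    return "", 204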
Since you're the one writing the server software (presumably), then it cares if you tell it to care. If you handle POST and GET data identically, then no, it doesn't.
However, the browser definitely cares. Refreshing or clicking back to a page you got as a response to a POST pops up the little "Are you sure you want to submit data again" prompt, for example.
GET has data limit restrictions based on the sending browser:
The spec for URL length does not dictate a minimum or maximum URL length, but implementation varies by browser. On Windows: Opera supports ~4050 characters, IE 4.0+ supports exactly 2083 characters, Netscape 3 -> 4.78 support up to 8192 characters before causing errors on shut-down, and Netscape 6 supports ~2000 before causing errors on start-up
If you use a GET request to alter back-end state, you run the risk of bad things happening if a webcrawler of some kind traverses your site. Back when wikis first became popular, there were horror stories of whole sites being deleted because the "delete page" function was implemented as a GET request, with disastrous results when the Googlebot came knocking...
"Use GET if: The interaction is more like a question (i.e., it is a safe operation such as a query, read operation, or lookup)."
"Use POST if: The interaction is more like an order, or the interaction changes the state of the resource in a way that the user would perceive (e.g., a subscription to a service), or the user be held accountable for the results of the interaction."
source
You should be aware of a few subtle security differences. See my question:
GET versus POST in terms of security?
Essentially the important thing to remember is that GET will go into the browser history and will be transmitted through proxies in plain text, so you don't want any sensitive information, like a password in a GET.
Obvious maybe, but worth mentioning.
By HTTP specifications, GET is safe and idempotent and POST is neither. What this means is that a GET request can be repeated multiple times without causing side effects.
Even if your server doesn't care (and this is unlikely), there may be intermediate agents between your client and the server, all of whom have this expectation. For example, proxies that cache data at your ISP or other providers for improved performance. The same expectation is true for accelerators, for example a prefetching plugin for your browser.
Thus a GET request can be cached (based on certain parameters), and if it fails, it can be automatically repeated without any expectation of harmful effects. So, really, your server should strive to fulfill this contract.
On the other hand, POST is not safe, not idempotent and every agent knows not to cache the results of a POST request, or retry a POST request automatically. So, for example, a credit card transaction would never, ever be a GET request (you don't want accounts being debited multiple times because of network errors, etc).
That's a very basic take on this. For more information, you might consider the "RESTful Web Services" book by Ruby and Richardson (O'Reilly press).
For a quick take on the topic of REST, consider this post:
http://www.25hoursaday.com/weblog/2008/08/17/ExplainingRESTToDamienKatz.aspx
The funny thing is that most people debate the merits of PUT v POST. The GET v POST issue is, and always has been, very well settled. Ignore it at your own peril.
GET has limitations on the browser side. For instance, some browsers limit the length of GET requests.
I think a more appropriate answer is that you can pretty much do the same things with both. It is not so much a matter of preference, however, but a matter of correct usage. I would recommend you use your GETs and POSTs as they were intended to be used.
Technically, no. All GET does is put the data in the first line of the HTTP request (the request URL), and POST puts it in the body.
However, how the "web infrastructure" treats the differences makes a world of difference. We could write a whole book about it. However, I'll give you some "best practises":
Use "POST" for when your HTTP request would change something "concrete" inside the web server. Ie, you're editing a page, making a new record, and so on. POSTS are less likely to be cached, or treated as something that's "repeatable without side-effects"
Use "GET" for when you want to "look at an object". Now, such a look might change something "behind the scenes" in terms of caching or record keeping, but it shouldn't change anything "substantial". Ie, I could repeat my GET over and over and nothing bad would happen, except for inflated hit counts. GETs should be easily bookmarkable, so a user can go back to that same object later on.
The parameters to the GET (the stuff after the ?, traditionally) should be considered "attributes to the view" or "what to view" and so on. Again, it shouldn't actually change anything: use POST for that.
And, a final word, when you POST something (for example, you're creating a new comment), have the processing for the post issue a 302 to "redirect" the user to a new URL that views that object. Ie, a POST processes the information, then redirects the browser to a GET statement to view the new state. Displaying information as a result of a POST can also cause problems. Doing the redirection is often used, and makes things work better.
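Here is a minimal sketch of that Post/Redirect/Get flow (illustrative only; the framework, routes, and names are not from the answer):

# The POST does the work, then 302-redirects to a GET view of the new resource,
# so refreshing or bookmarking the result never re-submits the form.
from flask import Flask, redirect, request, url_for

app = Flask(__name__)
comments = []

@app.post("/comments")
def create_comment():
    comments.append(request.form.get("text", ""))
    return redirect(url_for("show_comment", comment_id=len(comments) - 1), code=302)

@app.get("/comments/<int:comment_id>")
def show_comment(comment_id):
    return comments[comment_id]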
Should the user be able to bookmark the resulting page? Another thing to think about is some browsers/servers incorrectly limit the GET URI length.
Edit: corrected char length restriction note - thanks ars!
It depends on the software at the server end. Some libraries, like CGI.pm in perl handles both by default. But there are situations where you more or less have to use POST instead of GET, at least for pushing data to the server. Large amounts of data (where the corresponding GET url would become too long), binary data (to avoid lots of encoding/decoding trouble), multipart files, non-parsed headers (for continuous updates pre-AJAX style...) and similar.
The server technically couldn't care one way or the other about what kind of request it receives. It will blindly execute any request coming across the wire.
Which is the problem. If you have an action that destroys or modifies data in a GET action, Google will tear your site up as it crawls through indexing.
The server usually doesn't care. But it's mostly for following good practices, as you mentioned. The client side also matter - as mentioned you cannot bookmark a POST'd page usually, and some browsers have limits on the length of the URL for really long GET queries.
Since GET is intended for specifying the resource you want to get, and depending on the exact software on the server side, the web server (or the load balancer in front of it) may have a size limit on GET requests to prevent denial-of-service attacks...
Be aware that browsers may cache GET requests but will generally not cache POST requests.
Yes, it does matter. GET and POST are quite different, really.
You are right in that normally, GET is for "getting" data from the server and displaying a page, while POST is for "posting" data back to the server. Internally, your scripts get the same data whether it's GET or POST, so no, the server doesn't really care.
The main difference is GET parameters are specified in URLs, while POST is not. This is why POST is used for signup and login forms - you don't want your password in a URL. Similarly, if you're viewing different pages or displaying a specific view of some data, you normally want a unique URL.
It really does matter. I have gathered something like 11 things you should know about them.
11 things you should know about GET vs POST
No, the server shouldn't care, except for the points in jbruce2112's answer and the fact that uploading files requires POST.
