I am trying to create a load test for a Flex application, but an error appears during replay, and I can't understand why. I have done all the correlations for DSID.
I'm using the latest versions of the externalizable objects JARs:
- flex-messaging-common
- flex-messaging-core
- flex-messaging-data
- flex-messaging-data-req
- flex-messaging-opt
- flex-messaging-proxy
- flex-messaging-remoting
I have re-recorded the script and tried to find differences, but I didn't find anything significant. It's very confusing because the first 20 amf_call steps work fine.
Error Output:
Error:Decoding of AMF message failed. Error is : ReadValue Failed due Insufficient data to read at location 6
I'm using LoadRunner 12 with the Flex protocol and IE 9.
Edit: the line before the error is:
Warning:HTTP status code 500 was returned by the server
HTTP 500 is typically associated with two states:
(1) Missed correlation
(2) The application "coming off the tracks," where an unexpected response is sent back but no check is made for an acceptable response. One or two requests downstream, a 500 is thrown for a request which is out of context with the state of the data or the application.
Record the application twice. Compare the scripts. Make sure you have handled all of the dynamic data.
Record a third time, changing the data record used. Make sure you have covered all of the business-process-specific dynamic elements.
Record a fourth time with a different set of sign-on credentials. Compare to the previous recordings and make sure you have covered all of the dynamic elements related to user credentials.
Odds are you have missed a dynamic element related to the business process which is outside of the Flex session and state items.
I'm developing a dashboard site with Next.js which allows customization of a bunch of parameters, including date range, region, etc. The data needs to be fetched based on these parameters, which is implemented by calling the endpoint again when the state changes.
There is quite a lot of data (it usually takes around 6 seconds for the client to get it), but there are times in production when the initial load of the page takes over 30 seconds. The chances of something else being wrong are hopefully high (so I can get these horrendous times down), but I wanted to try to use SSR to render the page with the initial parameters, to at least speed it up a bit.
After some fiddling, this seems harder than I expected. Due to "Error: Rendered more hooks than during the previous render", passing the initial state from the server and comparing it to the initial client state to avoid calling an endpoint we've already called isn't possible (I'm using RTK Query, so the endpoint calls are hooks).
We already cache the endpoints, so that wouldn't be an issue, but we would pretty much be back to just CSR at that point, right? Is there a use case for SSR in applications with "dynamic" data? If so, how would you go about implementing it?
On a new website, I have a huge form (really big: it needs at least 15-20 minutes to finish) that configures the whole website for one client for the next year.
It's distributed across several tabs (it's a wizard). Every time we go to the next tab, it makes a regular (non-AJAX) call to the server that generates the next "page". The previous information is stored in the session (an object with a custom binder).
Everything was working fine until we tested it today with real data. Real data requires reflection and work to find the correct elements, and that takes time.
The problem we have is that the view receives a partially empty model. The session duration is set to 1440 minutes (in IIS too). What I know so far is that I get a null reference exception the first time I try to access the model in my view.
I've been checking the controller for about an hour, but it seems impossible for it to produce a null model. If I enter all the data very quickly, I don't have any problem (but that's with random data).
So far I've only managed to reproduce this problem on the IIS server, and I'm checking the ELMAH logs to debug it, so it's not easy to reproduce.
Do you have any idea how I should debug this? I'm a little lost here.
I think you should assume the session does not offer reliable persistence. I'm not sure about the details, but I suspect it starts evicting items when it exceeds its memory limit.
You will be safer if you use a database to store that information, or you could introduce your own implementation for persisting state, as sketched below.
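A minimal sketch of that second option (everything here is hypothetical: the WizardState table, its columns, and the model stub are assumptions, not the asker's actual code):

```csharp
using System.Data.SqlClient;
using System.IO;
using System.Text;
using System.Xml.Serialization;

// Hypothetical stand-in for the wizard's model.
public class WizardModel
{
    public string ClientName;
    public string Region;
}

// Sketch: persist wizard progress per user in a database table instead of
// trusting in-memory session to survive memory pressure or pool recycles.
public class WizardStateStore
{
    private readonly string _connectionString;

    public WizardStateStore(string connectionString)
    {
        _connectionString = connectionString;
    }

    public void Save(string userId, WizardModel model)
    {
        // Serialize the model to XML (any serializer works, as long as the model supports it).
        var serializer = new XmlSerializer(typeof(WizardModel));
        var sb = new StringBuilder();
        using (var writer = new StringWriter(sb))
        {
            serializer.Serialize(writer, model);
        }

        using (var conn = new SqlConnection(_connectionString))
        using (var cmd = new SqlCommand(
            "UPDATE WizardState SET Data = @data WHERE UserId = @user", conn))
        {
            cmd.Parameters.AddWithValue("@data", sb.ToString());
            cmd.Parameters.AddWithValue("@user", userId);
            conn.Open();
            if (cmd.ExecuteNonQuery() == 0)
            {
                // No row yet for this user; insert one (kept simple for the sketch).
                cmd.CommandText = "INSERT INTO WizardState (UserId, Data) VALUES (@user, @data)";
                cmd.ExecuteNonQuery();
            }
        }
    }
}
```

Saving after every tab means a recycled application pool costs the user nothing but a page reload.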
In addition to the answer provided by @Ufuk:
You can send an AJAX request every minute that does nothing on the server; this keeps the session from expiring, so the site continues to work over extended periods.
I think the problem was that the session didn't have enough space. I temporarily resolved the problem by restarting the application pool. I'm still searching for a solution that won't imply changing all this code; maybe another session-state mode, but then I need to make my models serializable.
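For reference, the out-of-process session-state modes (StateServer and SQLServer) serialize every session entry, so each stored type has to be marked serializable. A minimal sketch, reusing the hypothetical model from the earlier answer:

```csharp
using System;

// Out-of-proc session state serializes entries, so the stored model
// (and everything it references) must be [Serializable].
[Serializable]
public class WizardModel
{
    public string ClientName;
    public string Region;

    // Anything that can't (or shouldn't) be serialized must be excluded;
    // [NonSerialized] applies to fields only.
    [NonSerialized]
    private object _transientCache;
}
```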
I've got a site that uses an order entry form and sends a rather decently sized POST request when the form is submitted.
However, when a particular value is passed in one of our form variables (OrderDetail), it fails every time without exception: an error page in the browser and a 504 error via Fiddler.
Here are a couple examples of tests I ran last night sending POST requests through Fiddler. When the "OrderDetail=" value is changed to the below it will either submit successfully or return a 504 error after a few seconds:
These ones FAIL:
&OrderDetail=Deliver+Writ+of+Execution%3B+and+Application+for+Earnings+Withholding+Order+to+Los+Angeles+County+Sheriff+DASH+Court+Services+Division+per+instructions
&OrderDetail=Deliver+Execution+Earnings+Withholding+Order+to+Los+Angeles+County+Sheriff+DASH+Court+Services+Division+per+instructions
&OrderDetail=Deliver+Writ+of+Execution%3B+and+Application+for+Earnings+Withholding+Order+to+Los+Angeles+County+Sheriff
&OrderDetail=Deliver+Writ+of+Execution%3B+Application+for+Earnings+Withholding+Order+to+Los+Angeles+County+Sheriff
&OrderDetail=Writ+of+Withholding+Execution+Order+Los+Angeles+County+Sheriff
&OrderDetail=writ+Execution+adsfsdfsdfsd+Order+County
&OrderDetail=wd+Execution+adsfsdfsdfsd+Order+Count
This got me thinking that perhaps it has to do with the words "Exec" ('Exec' and 'Execution' throw errors; 'Exe' does not) and "Count" ('County' and 'Count' throw errors; 'Cont' does not).
However, I haven't seen anything this specific mentioned in Google searches regarding the 504 error.
Regarding the ColdFusion code around this, there is nothing fancy for this page; just a standard form post. I added a cfmail test in the Application file, and on these failures it never runs, so this seems to be between the browser and IIS. We're on a shared server, so I can't see too much there, though.
Oddly enough, when the &OrderDetail= param is changed to one of these values (very similar to the above), the result is success:
&OrderDetail=wd+Execution+adsfsdfsdfsd+Order+Coun
&OrderDetail=wd+Execution+adsfsdfsdfsd+Order+Conty
&OrderDetail=Writ+of+Withholding+Order+Execution+Los+Angeles+County+Sheriff
&OrderDetail=Writ+of+Withholding+ExecutionOrder+Los+Angeles+County+Sheriff
In the third one, I put 'Order' BEFORE 'Execution' and it works.
The total length of this POST request is about 4720 characters. I've increased the length of this one field to 5-6 times its original length and those requests passed, so it almost seems tied to the value of the &OrderDetail param in the POST rather than its size.
Any ideas on why this specific data could be an issue for a web server? I've never seen this before, and it isn't a problem for nearly any other request going through.
One interesting note as well: in the POST request, this variable is pretty close to the start of the param list. If I delete everything after it, it goes through with no problem, although I haven't been able to nail down what in the subsequent lines could be causing it. I can post the entire request if it will help.
More importantly, though, I just want to know what could qualify as "reserved" or "illegal" for FORM data. Everything appears to be escaped properly, so I'm not sure what else can be done here except some pre-processing JavaScript to further escape any such words.
Thanks!
Given that EXEC and COUNT are causing the error, whilst putting ORDER before EXEC is preventing the error, this sounds like something is making a flawed attempt at protecting from SQL injection attacks.
If you have any software in place that claims to do that, I would see if (temporarily) disabling it stops the problem from occurring.
(This software might be at the firewall level, so you may need to talk to your sys admins.)
Importantly, I would also check your codebase for where OrderDetail is used, and make sure that it is using cfqueryparam whenever it is used inside a query - and the same goes for all other user-supplied data.
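cfqueryparam is the CFML construct; for illustration, here is the same principle sketched in C#/ADO.NET terms (the Orders table and its columns are hypothetical): bind user input as a parameter so words like EXEC or COUNT in the text are always treated as data, never as SQL.

```csharp
using System.Data.SqlClient;

public static class OrderRepository
{
    // Sketch: the parameterized equivalent of wrapping a value in cfqueryparam.
    public static void SaveOrderDetail(SqlConnection conn, int orderId, string orderDetail)
    {
        using (var cmd = new SqlCommand(
            "UPDATE Orders SET OrderDetail = @detail WHERE OrderId = @id", conn))
        {
            // The text of orderDetail is never parsed as SQL, whatever it contains.
            cmd.Parameters.AddWithValue("@detail", orderDetail);
            cmd.Parameters.AddWithValue("@id", orderId);
            cmd.ExecuteNonQuery();
        }
    }
}
```

Note this protects the query itself; a keyword-sniffing filter sitting in front of IIS would still need to be fixed or disabled separately.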
This question is somewhat of a follow-up to How serious is this new ASP.NET security vulnerability and how can I workaround it?, so if my question seems broken, read over that question and its accepted solution first and then take that into the context of my question.
Can someone explain why returning the same error page and the same status code for custom errors matters? I find this to be immaterial, especially if this is advocated as part of the workaround for it.
Isn't it just as easy for the script/application to execute this attack and not specifically care which HTTP status code it gets, but rather focus on the outcome? I.e., doing this 4000 times you get redirected to an error page, whereas on attempt 4001 you stay on the same page because it didn't invalidate the padding?
I see why adding the delay to the error page is somewhat relevant, but doesn't this also just add another layer to fool the script into thinking the site is an invalid target?
What could be done to prevent this if the script takes into account that, since the site is ASP.NET, it's running AES encryption, and therefore ignores the timing of error pages and watches the redirection (or lack of redirection) as the response vector? If a script does this, will that mean there's NO WAY to stop it?
Edit: I accept the timing-attack reduction, but the error page part is what really seems bogus. This attack vector puts their data into the viewstate. There are only two cases: pass or fail.
Either they fail: they're on a page and the viewstate does not contain their data. No matter what you do here, there is no way to remove the fail case, because the page will simply never contain their inserted data unless they have successfully cracked the key. This is why I can't justify the custom errors usage having ANY EFFECT AT ALL.
Or they pass: they're on a page and the viewstate contains their inserted data.
Summary of this vulnerability
The cipher text from the WebResource.axd / ScriptResource.axd URL is taken, and a first guess at the validation key is used to generate a potential key value from the ciphered text.
This value is passed to WebResource.axd / ScriptResource.axd; at this point, if the decryption key was guessed correctly, the response will be accepted, but since the data it's looking for is garbage, WebResource.axd / ScriptResource.axd will return a 404 error.
If the decryption key was not successfully guessed, it will get a 500 error for the invalid-padding exception. At this point the attacking application knows to increment the potential decryption key value and try again, repeating until it finds the first successful 404 from WebResource.axd / ScriptResource.axd.
After having successfully deduced the decryption key, it can be used to exploit the site and find the actual machine key.
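To make the role of those distinct responses concrete, here is a simplified, illustrative sketch (not a working exploit; the URL handling is invented) of the one-bit test the attacking tool runs on each guess:

```csharp
using System.Net;

public static class PaddingOracleSketch
{
    // Returns true when the server's behavior indicates the padding was valid.
    // The oracle exists only because two failure modes are distinguishable:
    // 404 = decrypted fine but named a missing resource; 500 = padding exception.
    public static bool PaddingWasValid(string baseUrl, string candidateCipherText)
    {
        var request = (HttpWebRequest)WebRequest.Create(
            baseUrl + "/WebResource.axd?d=" + candidateCipherText);
        try
        {
            using (var response = (HttpWebResponse)request.GetResponse())
            {
                return true; // unlikely: the guess decrypted to a real resource
            }
        }
        catch (WebException ex)
        {
            var response = (HttpWebResponse)ex.Response;
            // If every failure returned the same status, this test would learn nothing.
            return response != null && response.StatusCode == HttpStatusCode.NotFound;
        }
    }
}
```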
re:
How is this relevant to whether they're redirected to a 200, 404, or 500? No one can answer this; this is the fundamental question. That is why I call shenanigans on needing to do this tomfoolery with the custom errors returning a 200. It just needs to return the same 500 page for both errors.
I don't think that was clear from the original question, so I'll address it:
Who said the errors need to return 200? That's wrong; you just need all the errors to return the same code. Making all errors return 500 would work as well. The config proposed as a workaround just happened to use 200.
If you don't do the workaround (even if it's your own version that always returns 500), you will see 404 vs. 500 differences. That is particularly true for WebResource.axd and ScriptResource.axd, since invalid decrypted data there means a missing resource / 404.
Just because you don't know which feature had the issue doesn't mean there aren't features in ASP.NET that give different response codes in different scenarios relating to padding vs. invalid data. Personally, I can't be sure whether any other feature gives different response codes as well; I can just tell you those two do.
Can someone explain why returning the same error page and the same status code for custom errors matters? I find this to be immaterial, especially if this is advocated as part of the workaround for it.
Sri already answered that very clearly in the question you linked to.
It's not about hiding that an error occurred; it's about making sure the attacker can't tell the difference between errors. Specifically, it's about making sure the attacker can't determine whether the request failed because it couldn't be decrypted (the padding was invalid) vs. because the decrypted data was garbage.
You could argue: "Well, but I can make sure it isn't garbage to the app." Sure, but you'd need to find a mechanism in the app that allows you to do that, and the way the attack works, you always need at least a tiny bit of garbage in the message. Consider these:
- ScriptResource and WebResource both throw, so the custom error hides it.
- View state is by default not encrypted, so by default it's not involved in the attack vector. If you go to the trouble of turning encryption on, then you very likely also set it to be signed/validated. When that's the case, a failure to decrypt and a failure to validate look the same, so the attacker again can't tell.
- The auth ticket is also signed, so it's like the view state scenario.
- Session cookies aren't encrypted, so they're irrelevant.
I posted on my blog about how the attack gets as far as being able to forge authentication cookies.
Isn't it just as easy for the script/application to execute this attack and not specifically care which HTTP status code it gets, but rather focus on the outcome? I.e., doing this 4000 times you get redirected to an error page, whereas on attempt 4001 you stay on the same page because it didn't invalidate the padding?
As mentioned above, you need to find a mechanism that behaves that way, i.e. where decrypted garbage keeps you on the same page instead of throwing an exception (and thus landing you on the same error page).
Either they fail: they're on a page and the viewstate does not contain their data. No matter what you do here, there is no way to remove the fail case, because the page will simply never contain their inserted data unless they have successfully cracked the key. This is why I can't justify the custom errors usage having ANY EFFECT AT ALL.
Or they pass: they're on a page and the viewstate contains their inserted data.
Read what I mentioned about the view state above. Also note that the ability to re-encrypt more accurately is gained after they gain the ability to decrypt. That said, as mentioned above, by default the view state is not encrypted, and when encryption is on, it's usually accompanied by signing/validation.
I am going to elaborate on my answer in the thread you referenced.
To pull off the attack, the application must respond in three distinct ways. Those three distinct ways can be anything: status codes, different HTML content, different response times, redirects, or whatever creative signal you can think of.
I'll repeat: the attacker must be able to identify three distinct responses without making any mistake; otherwise the attack won't work.
Now, coming to the proposed solution: it works because it reduces the three outcomes to just two. How does it do that? The catch-all error page makes the status code/HTML/redirect all look identical. The random delay makes it impossible to distinguish between one case and the other solely on the basis of time.
So, it's not a lie; it does work as advertised.
EDIT: You are mixing things up with a brute-force attack. There is always going to be a pass/fail response from the server, and you are right that it can't be prevented. But for an attacker to use that information to his advantage would take decades and billions of requests to your server.
The attack being discussed allows the attacker to reduce those billions of requests to a few thousand. This is possible because of the three distinct response states. The workaround being proposed reduces this back to a brute-force attack, which is unlikely to succeed.
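For context, the workaround published for this vulnerability paired a single catch-all error page with a small random sleep; a sketch of that idea (the page class name is arbitrary):

```csharp
using System;
using System.Security.Cryptography;
using System.Threading;

// Code-behind for the one catch-all error page. Every failure path lands here,
// so status code and content are uniform, and the random sleep blurs the
// timing difference between failure modes.
public partial class ErrorPage : System.Web.UI.Page
{
    protected void Page_Load(object sender, EventArgs e)
    {
        byte[] delay = new byte[1];
        var prng = new RNGCryptoServiceProvider();
        prng.GetBytes(delay);

        Thread.Sleep((int)delay[0]); // 0-255 ms of cryptographically random delay
    }
}
```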
The workaround works because:
- You do not give any indication of "how far" the slightly adjusted message took you. If you get a different error message, that is information you can learn from.
- With the delay, you hide how long the actual calculation took, so you do not get timing information that would show whether you got deeper into the system.
No, it isn't a big lie. See this answer in the question you referenced for a good explanation.
I've made some performance improvements to my application's backend, and to show the benefit to the end users of the GUI, we've been using the Trace.axd page to take timings. (The frontend is .NET 1.1 and the backend is Java, connected via web services.)
However, these timings show no difference between the old and the new backends.
By putting a breakpoint in the backend and holding a request there for 30 seconds, I can see from Trace.axd that the POST is taking 3ms, and the GET is taking 4s. I'm missing about 26s...
The POST is where the performance improvement should be, but the timing on the Trace page seems to only include the time it takes to send the request, not the time it takes to return.
Is there a way to increase the granularity of the information in the trace to include the whole of the request? Or is there another way to take the measurements I need?
OK, I kind of got what I wanted in the end. The problem is that the IIS trace doesn't include the time the POST takes to return.
I found that I could use Trace.Write() to add custom entries to the trace log, and even add a category, using Trace.Write(string category, string message).
Adding a call to Trace.Write() in my code that executes after the POST has completed gives me a better figure.
Still, it's not ideal as it's custom, and it's down to me to put it as near to the end of the POST cycle as possible.
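For example (with a hypothetical page and a placeholder for the actual web-service proxy call), the bracketing looks like this:

```csharp
using System;
using System.Web.UI;

public partial class OrderPage : Page
{
    protected void SubmitButton_Click(object sender, EventArgs e)
    {
        // The time between these two rows in Trace.axd is the real POST cost.
        Trace.Write("Backend", "POST starting");
        var result = CallBackendService();
        Trace.Write("Backend", "POST completed");
    }

    private object CallBackendService()
    {
        return null; // stand-in for the Java web service call
    }
}
```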
I'm not sure how you're making the requests on the .NET side, but I'll assume there's an HttpWebRequest involved somewhere. I'd expect HttpWebRequest.GetResponse() to return as soon as it has received the response headers; that way, you can start processing the start of a large response while the rest is still downloading. If your trace messages are immediately before and after a call to GetResponse, you therefore wouldn't see the whole execution time of the backend. Can you add a trace message immediately after you close the response?
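A sketch of that placement (the URL and page are illustrative): one mark before the call, one when GetResponse returns with the headers, and one once the body has been fully read.

```csharp
using System;
using System.IO;
using System.Net;
using System.Web.UI;

public partial class TimingPage : Page
{
    protected void Page_Load(object sender, EventArgs e)
    {
        Trace.Write("Timing", "before GetResponse");

        var request = (HttpWebRequest)WebRequest.Create("http://backend.example/service");
        using (var response = (HttpWebResponse)request.GetResponse())
        {
            Trace.Write("Timing", "headers received"); // GetResponse returns here

            using (var reader = new StreamReader(response.GetResponseStream()))
            {
                string body = reader.ReadToEnd();
                Trace.Write("Timing", "body fully read"); // the backend's full cost ends here
            }
        }
    }
}
```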
It is abnormal that the trace output doesn't show all the time you've spent in the breakpoint. Did you check the total time column to see if it matches the time you've spent in the request? Don't forget that one of the columns only shows the time spent since the preceding trace statement.
If you want more granular data in the trace output, you can add your own. The TraceContext class has two methods, Warn and Write, that add lines to the output (Warn adds them in red).
The TraceContext is accessible from every page or control: just use this.Trace.Warn() or this.Trace.Write() (and I think it is also accessible through the HttpContext class).