ExpectedResponseUrl error in Visual Studio 2010 web performance test - asp.net

I have recorded a Web Performance Test in Visual Studio 2010 for a web application that uses Windows Live ID with Secure Token Service for the authentication of the user.
There are two requests in my recording that go to https://login.live.com/ and they work fine in the Visual Studio Test Runner right after I record the test, but about a day later I start getting an error on those requests that states "The value of the ExpectedResponseUrl property does not equal the actual response URL". I understand that I can turn off the response URL validation rules, but that doesn't fix the root cause of the problem and only masks the symptoms.
I was wondering if anyone knows what is going on here. My guess is that there is cached data at the browser level, but I haven't been able to prove that one way or another.

Chances are you recorded a value in one of the post back fields that "expires" after a period of time. I am willing to bet the URL you are getting back from the request is now an error page.
If you are familiar with C#, I find it easier to change the test to a coded test and have a really close look at all the fields that are being sent to login.live.com.
You would probably need a fair bit of information on how login.live.com works to really get to the bottom of it.
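For anyone going that route, here is a rough, hypothetical sketch of what a VS2010 coded web test looks like, just to show where the recorded form post fields end up. The URL and field names below are placeholders; the real login.live.com request will have its own set, and any token-like recorded value is a candidate for the one that expires.

    using System.Collections.Generic;
    using Microsoft.VisualStudio.TestTools.WebTesting;

    public class LiveIdCodedTest : WebTest
    {
        public override IEnumerator<WebTestRequest> GetRequestEnumerator()
        {
            // The recorder generates one of these blocks per captured request.
            WebTestRequest request = new WebTestRequest("https://login.live.com/ppsecure/post.srf"); // placeholder URL
            request.Method = "POST";

            FormPostHttpBody body = new FormPostHttpBody();
            // Recorded literal values live here. Anything that looks like a
            // token or timestamp is likely the value that expires after a day.
            body.FormPostParameters.Add("login", "user@example.com");      // placeholder
            body.FormPostParameters.Add("PPFT", "<recorded token value>"); // placeholder
            request.Body = body;

            yield return request;
        }
    }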

How to close ASPSessionId in a Visual Studio Web/Performance Test

I have finally given up on this and am looking for some help. Here is what I have found so far.
First of all, web performance tests and load tests in Visual Studio do NOT use the browser during playback (it is only used while recording the test), and recording is when the ASPSessionId gets stored in a cookie or form post parameter.
I have web performance tests with extraction rules that get the ASPSessionID from the server, which I then try to set in a later request as a header/form post parameter. However, this doesn't seem to help, and it appears that I am just reusing the same one over and over, causing the server to respond differently (it presents different pages).
On the system I am testing, a user goes to the site and fills out an application. If the user stays in the same session, they can fill out multiple subsequent applications and re-use some data; in that case, the user is presented a page to select the re-usable data. If the session is new, the user does not get this page.
If I run the web test over and over manually, it works as expected (new session ID, no re-use data page presented). However, if I run that same test over and over in a load test, it passes the first time and fails every time after that, because the session is kept open and the server then serves different pages than the ones that exist in my web performance test. The failures on the subsequent applications include things like expected response URL and extraction rule errors.
So I was using an extraction rule to get the ASPSessionID from the server (roughly like the sketch below) and store it in a cookie and/or web form post parameters and then set it, but it is not working.
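For reference, the custom extraction rule looks more or less like this minimal sketch (the class name and context parameter name are illustrative, not the actual code):

    using Microsoft.VisualStudio.TestTools.WebTesting;

    // Pulls ASP.NET_SessionId out of the Set-Cookie response header and
    // stores it in the web test context for later requests to bind against.
    public class SessionIdExtractionRule : ExtractionRule
    {
        private string contextParameterName = "AspSessionId"; // illustrative name

        public override string ContextParameterName
        {
            get { return contextParameterName; }
            set { contextParameterName = value; }
        }

        public override void Extract(object sender, ExtractionEventArgs e)
        {
            string setCookie = e.Response.Headers["Set-Cookie"];
            const string marker = "ASP.NET_SessionId=";

            if (!string.IsNullOrEmpty(setCookie) && setCookie.Contains(marker))
            {
                int start = setCookie.IndexOf(marker) + marker.Length;
                int end = setCookie.IndexOf(';', start);
                string sessionId = end >= 0
                    ? setCookie.Substring(start, end - start)
                    : setCookie.Substring(start);

                // A later request can then bind {{AspSessionId}} into a
                // Cookie header or form post parameter.
                e.WebTest.Context[contextParameterName] = sessionId;
                e.Success = true;
                return;
            }

            e.Success = false;
        }
    }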
What can I do in the web performance test to successfully close the ASPSessionID so that the test runs like it is running for the first time in the load test?
In the LoadTest Test Mix, set the "Percentage of New Users" to 100. That completely solved it for me.

Application Insights removing telemetry after it has been logged

I've had Application Insights set up on my ASP.NET project for a couple months with no issues. I use Custom Events for logging certain events.
Recently, I tried to add a custom event after a user has authenticated in order to track login behavior. My custom event DOES get logged in the Application Insights debug session. I know this because I can see it in the telemetry when paused on a breakpoint just after the event.
However, when I continue running the application, my custom event no longer shows up in the telemetry. It just disappears.
I cannot understand what the issue is. Does anyone familiar with this have any (application) insights? I couldn't help myself ;)
There are some things to check:
Are you logging to one resource (iKey) and searching on another? A lot of people send data to one resource in dev/debug and a different resource in release/prod environments, so make sure you're sending to the place you expect and searching the place you expect (see the sketch after this list).
Is the data actually going out successfully? You may need to use Fiddler or some other tool to watch your outbound HTTP for calls to dc.services.visualstudio.com. There could be something wrong with the data you're sending, or you might be getting capped or throttled by the service. If that's the case, the outbound requests will have responses other than 200, and the responses will generally tell you why any rejected items were not accepted.
If the data is getting sent successfully and is going where you expect it to go, there might just be a delay in backend processing. You can always check aka.ms/aistatus to see if there are any current issues with the service.
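On the first point, here is a minimal sketch of routing dev vs. prod telemetry by instrumentation key. The keys, class, and event name are placeholders, and it assumes you set the key in code rather than in ApplicationInsights.config:

    using System.Collections.Generic;
    using Microsoft.ApplicationInsights;
    using Microsoft.ApplicationInsights.Extensibility;

    public static class TelemetryRouting
    {
        // Call once at startup, e.g. from Application_Start in Global.asax.
        public static void Configure()
        {
    #if DEBUG
            // Dev/debug resource -- events sent here will never show up when
            // you search the production resource.
            TelemetryConfiguration.Active.InstrumentationKey = "11111111-0000-0000-0000-000000000000";
    #else
            TelemetryConfiguration.Active.InstrumentationKey = "22222222-0000-0000-0000-000000000000";
    #endif
        }

        // Example of a custom event logged after authentication succeeds.
        public static void TrackLogin(string userName)
        {
            var client = new TelemetryClient();
            client.TrackEvent("UserLoggedIn",
                new Dictionary<string, string> { { "user", userName } });
        }
    }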
I am confused, however, by what you mean when you say
However, when I continue running the application, my custom event no longer shows up in the telemetry. It just disappears.
What do you mean by "it just disappears"? If you see it in the output window, then the SDK saw it, and it will get sent, precluding any of the above 3 items. Where is it "disappearing" from? Unless you clear the output window, it's never gone from there. If you're talking about the VS search tools that show data sent by the AI SDK during debug, that tool currently caps at the most recent 250 items that have occurred during the debug session.

Secured WCF service timing out on 2nd invocation of client channel

We have a secured & authenticated WCF service which cannot use service references. Thus, we provide the interface for the contracts and open the client channel manually.
We have found that as long as we open it once, everything works fine; we can call several methods several times. However, if the channel is closed or simply set to a new instance, the Login() call (which is required as the first step before using the service) times out.
To make matters even more mysterious, this only happens on our production server. If I run the same project locally, I can log in as many times as I want. Consuming the methods from a web browser (even from a code-behind ASPX page) does not have this problem, even against the production server. ONLY when a .NET client tries to open a client channel against the production server do we have this problem.
We are not even sure where to start looking. Any advice would be greatly appreciated.
UPDATE:
As per #Rene's suggestion, we turned on logging on both sides. In the client's log there is a record of the error, which is basically the same timeout error we already got via the exception, so nothing meaningful. In the server's logs there are records of service methods being invoked successfully even after the 2nd Login(), and from the server's point of view the requests are served.
Additionally, I discovered that I could not reproduce this issue on my machine using the same test project; it reproduces on my developer's machine. I verified that we are on the same versions of the .NET Framework and Visual Studio. It must surely be a client-side problem, but what could it be?
In case anyone else is looking for the answer, we finally found it: the issue is that the client side needs to set System.Net.ServicePointManager.DefaultConnectionLimit to a higher value. The default value is 2, but in reality this allows only one proxy to be created and usable; setting it to 3 allows 2 proxies to be created and used.
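A minimal sketch of the fix, assuming a hand-rolled channel like ours (the contract, binding, and address below are placeholders, not the real service). The key line is setting DefaultConnectionLimit before any channel is opened:

    using System.Net;
    using System.ServiceModel;

    [ServiceContract]
    public interface IMyService // stand-in for the real contract interface
    {
        [OperationContract]
        bool Login(string user, string password);
    }

    class Client
    {
        static void Main()
        {
            // The crucial fix: raise the per-host connection limit *before*
            // opening any channels. The default of 2 is what effectively
            // starved the second proxy in our case.
            ServicePointManager.DefaultConnectionLimit = 10;

            var factory = new ChannelFactory<IMyService>(
                new BasicHttpBinding(BasicHttpSecurityMode.Transport),
                new EndpointAddress("https://production-server/MyService.svc")); // placeholder

            IMyService proxy = factory.CreateChannel();
            proxy.Login("user", "secret"); // the required first call
            // ... invoke service methods ...
            ((IClientChannel)proxy).Close();
            factory.Close();
        }
    }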

Error 403 on SECOND postback of the same form (and various other situations)

We recently migrated our application (IIS server + DB server) to AWS and also modified the network architecture a little. The entry point of the system is an Astaro firewall (we use the AWS AMI), which also hosts the SSL certificate of the web server. Everything related to the firewall has been done by a vendor, and we only have read-only privileges.
We are getting 403 errors in a few situations, but I will explain just one, as they may all be related.
We have a form which queries the database and returns a report in HTML format (the report also has some checkboxes to do updates). The first time the form is submitted, we always get the report back. If we want to post the form again, updated with new data, it crashes, returning error 403. We noted that it doesn't crash when the first result returned a very low number of rows (or none).
Looking at the details of the POSTs in Developer Tools, the only apparent difference between a working reply and a 403 error is the size of the data posted. The second post is always bigger because it contains the data of the first report (as the page also has checkbox options on the rows).
Also, looking at the IIS logs, we don't see any trace of the POST that crashes. Nothing at all.
This problem happens only in production. In the dev environment everything works flawlessly. The only difference is that production has the firewall/SSL, while development is all open. This is why we think it may be related to SSL.
The vendor is not the most helpful; we are looking for help to pinpoint the issue so we can take the situation into our own hands.
Any input appreciated.

When is load for IIS7 too much?

At a customer of ours, candidates take tests with our software. When a test is finished, some calculations are done on the server. Sometimes 200 candidates can end their test at the same time, so 200 calculations run concurrently. The calculations all seem to go fine, but some calls to the IIS7 server get back an HTTP error...
In Flex, this is the error:
code = "NetConnection.Call.Failed"
description = "HTTP: Status 200"
details = "http://servername/weborb.aspx"
level = "error"
Isn't status 200 OK? So what's wrong here? Is it even an IIS7 problem? Of the 200 candidates, 20 got this message. When they restarted their test, everything worked fine.
I have found this on the subject, but I wonder if it has anything to do with my problem (next week our customer will do some stress tests, and I have already asked them to test whether the solution in that post works).
Some questions:
Can it be that IIS7 blocks certain HTTP calls when the load is too high?
How can you tell that IIS7 blocked those calls because of too much load?
Is it possible to configure these things?
Technically, in the future I would like to queue the calculations, but for now there is neither time nor budget for that.
Application: Flex, WebORB, ASP.NET, IIS7 and SQL Server 2008. The server runs Windows Server 2008.
This problem seems very familiar to me. We have a bunch of Flex widgets connected to one server side, and sometimes it also returns "NetConnection.Call.Failed". For us, it seems that IIS (and MSSQL behind it) cannot process all the requests in time, hence some of them time out.
Try to check how much time each request (and all requests together) takes, then check your timeout setting.
There are plenty of things you can do to fine tune the performance of both your server and IIS.
To answer your questions:
A maximum concurrent connections limit (plus other settings) in IIS 7 can be configured by selecting your website in IIS Manager and choosing 'Advanced Settings' in the Actions pane on the right; a scripted way to do the same is sketched after this list. By default this limit is a number much higher than 200.
Looking in the IIS log files, specifically at the returned status codes, can give you an indication of what went wrong. Equally, the Windows event log should tell you about any exceptions that have occurred.
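If you would rather script that limit than click through IIS Manager, here is a minimal sketch using the managed IIS API. The site name and value are placeholders; it assumes a reference to Microsoft.Web.Administration.dll and running elevated on the server:

    using Microsoft.Web.Administration;

    class RaiseConnectionLimit
    {
        static void Main()
        {
            using (var manager = new ServerManager())
            {
                // "Default Web Site" is a placeholder -- use your site's name.
                Site site = manager.Sites["Default Web Site"];

                // Corresponds to "Maximum Concurrent Connections" in the
                // Advanced Settings dialog mentioned above.
                site.Limits.MaxConnections = 5000;

                manager.CommitChanges();
            }
        }
    }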
I suggest you turn on load balancing between instances of IIS, or consider using nginx for load balancing.
Also raise the limit of 200 users. In IIS, each user connecting to your application counts as one user instance, so at some point you will use up all 200 user slots. This is the default setting, and you can set it to a much higher number.
Also set your timeout to a higher number.
Also look at Comet if you are trying to push constantly changing results, like live data (stock, weather, chat, a shoutbox).
Technically, in the future I would like to queue the calculations, but for now there is neither time nor budget for that.
A queue isn't that hard to put together with a batch-processing script running off Windows' scheduled tasks. Just dump the results into a SQL DB, or if you're really lazy, insert rows in SQL with a serialized array, then have candidates "come back" to see their results: "Please wait, your results are still processing." (See the sketch below.)
It'd take you less time than waiting around on SO for a silver-bullet answer in my opinion.
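A minimal sketch of that idea, assuming a hypothetical CalculationQueue table with columns (Id, CandidateId, Payload, Status, Result): the web app enqueues and returns immediately, and a scheduled task drains pending rows:

    using System;
    using System.Data.SqlClient;

    class CalculationQueue
    {
        const string ConnStr = "Server=.;Database=Tests;Integrated Security=true"; // placeholder

        // Called from the web app when a candidate finishes: enqueue and return fast.
        public static void Enqueue(int candidateId, string payload)
        {
            using (var conn = new SqlConnection(ConnStr))
            using (var cmd = new SqlCommand(
                "INSERT INTO CalculationQueue (CandidateId, Payload, Status) " +
                "VALUES (@id, @payload, 'Pending')", conn))
            {
                cmd.Parameters.AddWithValue("@id", candidateId);
                cmd.Parameters.AddWithValue("@payload", payload);
                conn.Open();
                cmd.ExecuteNonQuery();
            }
        }

        // Run periodically from a Windows scheduled task; processes one row
        // per invocation (loop or reschedule as needed).
        static void Main()
        {
            using (var conn = new SqlConnection(ConnStr))
            {
                conn.Open();

                var select = new SqlCommand(
                    "SELECT TOP 1 Id, Payload FROM CalculationQueue " +
                    "WHERE Status = 'Pending' ORDER BY Id", conn);

                int id;
                string payload;
                using (SqlDataReader reader = select.ExecuteReader())
                {
                    if (!reader.Read()) return; // nothing queued
                    id = reader.GetInt32(0);
                    payload = reader.GetString(1);
                }

                string result = RunCalculation(payload);

                var update = new SqlCommand(
                    "UPDATE CalculationQueue SET Status = 'Done', Result = @r " +
                    "WHERE Id = @id", conn);
                update.Parameters.AddWithValue("@r", result);
                update.Parameters.AddWithValue("@id", id);
                update.ExecuteNonQuery();
            }
        }

        // Placeholder for the real server-side scoring logic.
        static string RunCalculation(string payload)
        {
            return payload.ToUpperInvariant();
        }
    }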
