I have a C# script that HTTP POSTs to another server with WebRequest. I'd like to test how my web application would respond if the other server became unresponsive. What is the best way to do this without having to change any of my application code or configuration? Will Fiddler be able to cause timeouts for requests coming from my local IIS?
You can achieve this using Fiddler's "Simulate Modem Speeds" rule.
Rules -> Customize Rules
Find the line that sets oSession["response-trickle-delay"] and change it: set the value to 10000 (the delay, in milliseconds, added per KB of response data), which should be enough to cause a timeout. Save the file.
Rules -> Performance -> Simulate Modem Speeds (should be checked).
Note: use ipv4.fiddler instead of localhost in the request URL, otherwise Fiddler will not capture the traffic.
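For reference, here is a minimal sketch of what the timeout should then look like from the calling code once the trickle delay stalls the response past the request timeout (the URL and payload are illustrative; HttpWebRequest.Timeout defaults to 100 seconds, so lower it if you want the test to fail quickly):

```csharp
using System;
using System.Net;
using System.Text;

class TimeoutTest
{
    static void Main()
    {
        // Illustrative target URL: ipv4.fiddler in place of localhost so Fiddler sees the call.
        var request = (HttpWebRequest)WebRequest.Create("http://ipv4.fiddler/otherService/endpoint");
        request.Method = "POST";
        request.ContentType = "application/x-www-form-urlencoded";
        request.Timeout = 5000; // milliseconds; the default of 100000 would make the test very slow

        byte[] body = Encoding.UTF8.GetBytes("payload=test");
        request.ContentLength = body.Length;
        using (var stream = request.GetRequestStream())
        {
            stream.Write(body, 0, body.Length);
        }

        try
        {
            using (var response = (HttpWebResponse)request.GetResponse())
            {
                Console.WriteLine("Unexpected success: " + response.StatusCode);
            }
        }
        catch (WebException ex)
        {
            if (ex.Status == WebExceptionStatus.Timeout)
            {
                // This is the code path your application should exercise when the
                // downstream server is unresponsive.
                Console.WriteLine("Request timed out, as expected.");
            }
        }
    }
}
```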
This can also be done for specific responses, where you want to intercept the response as well as simulate latency, by using the AutoResponder.
Tick the "Enable Latency" checkbox, then right-click any rule and pick "Set Latency"; you can then adjust the latency in milliseconds.
The other part of your question is "How do I get my service to use Fiddler", which has an answer here: http://www.fiddler2.com/Fiddler/help/hookup.asp#Q-IIS
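In short (the linked page is the authoritative reference, and note that this does involve a small, removable configuration change): for .NET code such as WebRequest running under IIS, you typically point outbound calls at Fiddler by configuring a default proxy in web.config, along these lines (8888 is Fiddler's default listening port):

```xml
<!-- web.config of the application whose outbound calls Fiddler should capture -->
<configuration>
  <system.net>
    <defaultProxy>
      <!-- Route WebRequest/HttpWebRequest traffic through Fiddler (default port 8888). -->
      <proxy proxyaddress="http://127.0.0.1:8888" bypassonlocal="false" />
    </defaultProxy>
  </system.net>
</configuration>
```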
I'm wondering if anyone else has had this issue with Azure Front Door and the Azure Web Application Firewall and has a solution.
The WAF is blocking simple GET requests to our ASP.NET web application. The rule that is being triggered is DefaultRuleSet-1.0-SQLI-942440 SQL Comment Sequence Detected.
The only place I can find an SQL comment sequence is in the .AspNet.ApplicationCookie, as in this truncated example: RZI5CL3Uk8cJjmX3B8S-q0ou--OO--bctU5sx8FhazvyvfAH7wH. If I remove the two dashes ('--') from the cookie value, the request successfully gets through the firewall. As soon as I add them back, the request gets blocked by the same firewall rule.
It seems that I have 2 options. Disable the rule (or change it from Block to Log) which I don't want to do, or change the .AspNet.ApplicationCookie value to ensure that it does not contain any text that would trigger a firewall rule. The cookie is generated by the Microsoft.Owin.Security.Cookies library and I'm not sure if I can change how it is generated.
I ran into the same problem as well.
If you look at the cookie value RZI5CL3Uk8cJjmX3B8S-q0ou--OO--bctU5sx8FhazvyvfAH7wH, it contains two dashes (--), which is a potentially dangerous SQL comment sequence: an attacker could use it to comment out the rest of a query and run their own command instead.
But obviously this cookie value is never going to be executed as a query on the SQL side, and we are sure about that. So we can create rule exclusions so that the rule is not evaluated against specific fields.
Go to your WAF > click Managed rules on the left blade > click Manage exclusions at the top > click Add.
In your case, adding this rule would be fine:
Match variable: Request cookie name
Operator: Starts With
Selector: .AspNet.ApplicationCookie
However, I use ASP.NET Core 3.1 with ASP.NET Core Identity, and I encountered other blocked values as well, such as __RequestVerificationToken.
Here is my full list of exclusions. I hope it helps.
P.S. I think there is a glitch at the moment: if you have an IP restriction on an environment such as UAT, these exclusions cause the Web Application Firewall to bypass the IP restriction, and your UAT site becomes open to the public even though you still have a custom IP restriction rule on your WAF.
I ran into something similar and blogged about it here: Front Door incomplete first request.
To test this I created a web application and put it behind the Front Door service. In that test application I iterate over all the properties of the HttpContext.HttpRequest and print them out. As far as I can see right now, there are two properties that have differences between a direct request and a request through Front Door. Both the AcceptTypes and the UserLanguages property are empty for Front Door requests, while they are absolutely filled in when directly accessing the test application.
I’m not quite sure what the reason is for the first Front Door request to be different from a direct request. Is it a bug? Is it intentional and if so, why? Or is it because Front Door is developed using a framework that doesn’t support these properties, having them be empty when being forwarded?
Unfortunately I didn't find a solution to the issue, but to answer the question if anyone else is experiencing this: I did experience something similar.
It seems the cookie got corrupted. Comparing the fields that existed before against a healthy cookie, my guess is that somewhere in the content of the field something is being interpreted as a truncated SQL statement and is triggering the rule. I still have to determine whether this is true and/or what caused it.
I ran into this issue, but the token was being passed via the request query string rather than via a cookie. In case it helps someone: for the specified host I had to allow the request via a custom rule doing a regex match on the RequestUri, using the following regex (taken from the original managed rule):
:\/\\\\*!?|\\\\*\/|[';]--|--[\\\\s\\\\r\\\\n\\\\v\\\\f]|--[^-]*?-|[^\\u0026-]#.*?[\\\\s\\\\r\\\\n\\\\v\\\\f]|;?\\\\x00
I'd like to ask a question regarding HTTP conditional GET. Is it actually reliable for detecting changes to a web resource?
I mean, if I write a program that checks whether page content has changed using a conditional GET, is it possible that the web server is misconfigured (or intentionally configured) to report that nothing has changed even though the content of the HTML or XML (RESTful) resource has changed?
(I'm referring to requesting a web page with an "If-Modified-Since" header as part of the GET request. Is the modified date that comes back always reliable?)
Of course it is possible. But the whole point of using a communication protocol is that you trust the other side to fulfil it.
Situations like the one you mention are usually called "byzantine", because one of the ends is not simply failing to follow the protocol; it is cheating.
Yes, it is possible for a server to say that the content hasn't changed even though it has. It is still just code running on the server so it can do anything it wants.
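To make the mechanism concrete, here is a minimal sketch (URL and date are illustrative) of a conditional GET: the client sends If-Modified-Since and simply trusts whatever the server answers, whether that is 304 Not Modified or 200 with a new body.

```csharp
using System;
using System.Net;
using System.Net.Http;

class ConditionalGetSketch
{
    static void Main()
    {
        using (var client = new HttpClient())
        {
            // Timestamp of the copy we already have (illustrative value).
            client.DefaultRequestHeaders.IfModifiedSince =
                new DateTimeOffset(2023, 1, 1, 0, 0, 0, TimeSpan.Zero);

            var response = client.GetAsync("http://example.com/resource.xml").Result;

            if (response.StatusCode == HttpStatusCode.NotModified)
            {
                // We have no choice but to trust this: the server *claims* nothing changed.
                Console.WriteLine("304 Not Modified - keeping the cached copy.");
            }
            else
            {
                Console.WriteLine("Content (re)fetched: " + response.StatusCode);
            }
        }
    }
}
```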
It is quite easy to update the interface by sending a jQuery AJAX request and updating the page with the new content, but I need something more specific.
I want to send a response to clients without their having requested it, and update their content whenever something new appears on the server, without sending an AJAX request every time. When the server has new data, it sends a response to every client.
Is there any way to do this using HTTP or some specific functionality inside the browser?
Websockets, Comet, HTTP long polling.
This is called server push (you can also find it under the name "Comet"). Search using these keywords and you will find plenty of examples, tools, and so on. No special protocol is required for it.
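As an illustration of the long-polling flavour of Comet (shown in C# for brevity; in a browser you would do the same with XMLHttpRequest or fetch, and the URL here is made up): the client keeps a request open, the server answers only when it has news, and the client immediately re-issues the request.

```csharp
using System;
using System.Net.Http;
using System.Threading.Tasks;

class LongPollClient
{
    static async Task Main()
    {
        using (var client = new HttpClient { Timeout = TimeSpan.FromSeconds(90) })
        {
            while (true) // keep one request outstanding at all times
            {
                try
                {
                    // The server is expected to hold this request open until it has new data
                    // (or until its own timeout), then answer and let the client reconnect.
                    string update = await client.GetStringAsync("http://example.com/updates?wait=60");
                    Console.WriteLine("Server pushed: " + update);
                }
                catch (TaskCanceledException)
                {
                    // Client-side timeout with no news; just poll again.
                }
            }
        }
    }
}
```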
Aaah! You are trying to break the principles of the web :) You see, if the web were pure MVC (model-view-controller), the 'server' could actually send messages to the client(s) and ask them to update. The issue is that the server could be load balanced, and the same request could be sent to different servers. Now, if you were to send a message back to the client, you'd have to know everyone connected to the server. Let's say the site is quite popular and about 100,000 people connect to it every day. You'd actually have to store the IPs of each of them to know where on the internet they are and to be able to "push" a message to them.
Caveats:
What if they are no longer browsing your website? Currently there is no way to log out automatically when you close your browser; the server needs to check after a fixed timeout whether you have logged out (or you send a new nonce with every response so the server doesn't have to do that check).
What about a system restart, crash, etc.? You'd lose all the IPs you were keeping track of and be back to square one: people are connected to you, but until you receive new requests you can't really "send" them data, even though they may be expecting it as per your model.
Take the example of Facebook's news feed, or the "Most recent" link near the top right: sometimes while you are browsing your wall you see the number next to "Most recent" go up, or a new item come to the top of your wall. That's the client sending periodic requests to the server to find out what was updated, rather than the other way round.
You see, it keeps things simple and RESTful. You may feel it's inefficient for the client to "poll" the server to pull data and you'd prefer push, but the design of the server stays much simpler :)
I suggest AJAX polling is the best way to go: you are distributing the work to the clients and keeping things simple (KISS principle :)
Of course you can get around it, the question is, is it worth it?
Hope this helps :)
RFC 6202 might be a good read.
At a customer of ours, candidates take tests with our software. When a test is finished, some calculations are done on the server. Sometimes 200 candidates finish their test at the same time, so 200 calculations run concurrently. The calculations all seem to go fine, but some calls to the IIS7 server come back with an HTTP error...
In Flex, this is the error:
code = "NetConnection.Call.Failed"
description = "HTTP: Status 200"
details = "http://servername/weborb.aspx"
level = "error"
Isn't status 200 OK? So what's wrong here? Is it even an IIS7 problem? Of the 200 candidates, 20 got this message. After restarting their test, everything worked fine.
I have found this on the subject, but I wonder if it has anything to do with my problem (next week our customer will do some stress tests, and I've already asked them to check whether the solution in that post works).
Some questions:
Can it be that IIS7 blocks certain HTTP calls when the load is too high?
How can you tell that IIS7 blocked those calls because of too much load?
Is it possible to configure these things?
Technically, in the future I would like to queue the calculations, but for now, there isn't time nor budget for that.
Application: Flex, WebORB, ASP.NET, IIS7 and SQL Server 2008. The server runs Windows Server 2008.
This problem seems very familiar to me. We have a bunch of Flex widgets connected to one server side, and it sometimes also returns "NetConnection.Call.Failed". For us, it seems that IIS (and the MSSQL server behind it) cannot process all the requests in time, hence some of them time out.
Try to check how long each request (and all requests together) takes, then check your timeout settings.
There are plenty of things you can do to fine tune the performance of both your server and IIS.
To answer your questions:
A maximum concurrent connections limit (plus other settings) in IIS 7 can be configured by selecting your website in IIS Manager and selecting 'Advanced Settings' in the Actions Pane on the right. Though by default this is a number much higher than 200.
Looking in the IIS log files, specifically the return status codes can give you an indication of what went wrong. Equally the Windows event log should also tell you of any exceptions that have occurred.
I suggest you turn on load balancing between instances of IIS, or consider using nginx for load balancing.
Also set the limit of 200 users higher. In IIS, each user connecting to your application counts as one user instance, so at some point you will use up the 200 user slots. This is the default setting, and you can set it to a much higher number.
Also set your timeout to a higher number.
Also look at Comet if you are trying to deliver continuously updated results such as live data (stock, weather, chat, shoutbox).
Technically, in the future I would like to queue the calculations, but for now, there isn't time nor budget for that.
A queue isn't that hard to put together with a batch-processing script running off Windows' scheduled tasks. Just dump results into a SQL DB, or if you're really lazy, insert rows in SQL with a serialized array, then have them "come back" to see their results. "Please wait, your results are still processing."
It'd take you less time than waiting around on SO for a silver-bullet answer in my opinion.
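For what it's worth, here is a rough sketch of that batch-queue idea (the connection string, table and column names are made up; the web request only inserts a row, and a console task run from Windows' scheduled tasks drains the queue):

```csharp
using System;
using System.Data.SqlClient;

class CalculationQueueWorker
{
    // Illustrative connection string and schema: a PendingCalculations table with
    // Id, CandidateId, Payload and Processed columns.
    const string ConnectionString = "Server=.;Database=Tests;Integrated Security=true";

    static void Main()
    {
        using (var connection = new SqlConnection(ConnectionString))
        {
            connection.Open();

            // Pick up one unprocessed work item per run.
            var select = new SqlCommand(
                "SELECT TOP 1 Id, CandidateId, Payload FROM PendingCalculations WHERE Processed = 0",
                connection);

            int id, candidateId;
            string payload;
            using (var reader = select.ExecuteReader())
            {
                if (!reader.Read()) return; // nothing to do this run
                id = reader.GetInt32(0);
                candidateId = reader.GetInt32(1);
                payload = reader.GetString(2);
            }

            RunCalculation(candidateId, payload); // the existing server-side calculation

            var update = new SqlCommand(
                "UPDATE PendingCalculations SET Processed = 1 WHERE Id = @id", connection);
            update.Parameters.AddWithValue("@id", id);
            update.ExecuteNonQuery();
        }
    }

    static void RunCalculation(int candidateId, string payload)
    {
        // Placeholder for the real calculation; the web page polls and shows
        // "Please wait, your results are still processing." until Processed = 1.
    }
}
```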
In an ASP.NET application, how is it possible to download all PNG, CSS, JavaScript and other resources in parallel?
I am monitoring with Fiddler and found that the content is downloaded one item after another.
That is actually browser (client) behaviour, in accordance with the HTTP 1.1 specification: the guideline is to limit simultaneous downloads to two per hostname.
http://www.yuiblog.com/blog/2007/04/11/performance-research-part-4/
While you may be able to alter your browser's settings to download more per hostname, that only affects your machine and not those of others out in the Internet wilderness. One way to trick clients into downloading more simultaneously is to serve your web resources from different hostnames, e.g. images stored at http://images.yoursite.com. But you may want to test this and balance it out, as per the article's suggestion.
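A tiny sketch of that hostname-splitting trick (the shard hostnames are illustrative, and every shard must serve the same content): pick the hostname for each resource deterministically, so a given URL always maps to the same host and stays cacheable.

```csharp
using System;

static class ResourceHostShard
{
    // Illustrative shard hostnames; all must point at the same static content.
    static readonly string[] Hosts =
    {
        "http://static1.yoursite.com",
        "http://static2.yoursite.com"
    };

    // Deterministic choice: the same path always resolves to the same hostname,
    // so the browser cache is not defeated by alternating URLs.
    public static string UrlFor(string path)
    {
        int shard = (StableHash(path) & 0x7FFFFFFF) % Hosts.Length;
        return Hosts[shard] + "/" + path.TrimStart('/');
    }

    // Simple stable hash; string.GetHashCode() can differ between processes/runtimes.
    static int StableHash(string s)
    {
        int hash = 17;
        foreach (char c in s) hash = unchecked(hash * 31 + c);
        return hash;
    }
}

// Usage in a page, e.g.: <img src="<%= ResourceHostShard.UrlFor("images/logo.png") %>" />
```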
You can try AJAX for that: usually around five concurrent client/server HTTP connections are allowed, so you could theoretically use them all at once.
However, I guess you will gain little from this unless you have really big (or many) CSS and JavaScript files.
I'm not sure whether this will work for images or other files.