I have an application that is a load tester; it uses an async web API to send lots of traffic to a test server. The application has two GUI incarnations: one is a web app controlled via standard .aspx forms, the other is a WPF desktop application. The HTTP code, however, is the same in both cases, so I'm confused about the performance difference.
In the WPF application there is a delay of about 30 seconds before GetRequestStreamCallback is called by the CLR. In the web application it is more like 40 ms. I suspect this has something to do with the threading models of the two applications (there are lots of threads not shown here). Since GetRequestStreamCallback is a callback, I have no influence over the priority with which it is called.
Any insight is appreciated,
Aaron
public class PendingRequestWrapper
{
    public HttpWebRequest request;
    public PendingRequestWrapper(HttpWebRequest req) { request = req; }
}
public class Poster
{
public static void SendPost(string url)
{
    HttpWebRequest request = (HttpWebRequest)WebRequest.Create(url);
    request.Method = "POST";
    // more header setup ...
    PendingRequestWrapper wrap = new PendingRequestWrapper(request);
    wrap.request.BeginGetRequestStream(new AsyncCallback(GetRequestStreamCallback), wrap);
}
private static void GetRequestStreamCallback(IAsyncResult asynchronousResult)
{
PendingRequestWrapper wrap = asynchronousResult.AsyncState as PendingRequestWrapper;
try {
System.Diagnostics.Debug.WriteLine("Received req stream for " + wrap.request.RequestUri.ToString());
// End the operation
Stream postStream = wrap.request.EndGetRequestStream(asynchronousResult);
} catch(Exception e)
{
// ...
}
// Use the stream
}
}
The reason the WPF application gets slower performance against the web server than the IIS-hosted application does is that, in a client application, the default connection limit per host is 2, whereas in an IIS application it defaults to 32k. The fix is:
ServicePoint myPoint = ServicePointManager.FindServicePoint(new Uri("http://example.com"));
// WPF application needs this!
myPoint.ConnectionLimit = 10000;
This is probably irrelevant to all applications except for load-testing-types that open many connections to the same host.
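If every outgoing connection in the process should get the higher limit, you can raise the process-wide default instead, as long as it runs before the first request is created (a minimal sketch; the limit value is illustrative):
// Process-wide alternative to configuring ServicePoints one host at a time;
// must be set before any HttpWebRequest to the host is created.
ServicePointManager.DefaultConnectionLimit = 10000;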
I'm using ASP.NET Web API 2.2 along with Owin to build a web service, and I've observed that each call to the controller is served by a separate thread on the server side. That's nothing surprising and is the behavior I expected.
One issue I'm having is that the server-side actions are very memory-intensive, so if more than X users call in at the same time, there is a good chance the server code will throw an out-of-memory exception.
Is it possible to set a global "maximum action count" so that Web API can queue (not reject) the incoming calls and only proceed when there's an empty slot?
I can't run the web service in 64-bit because some of the referenced libraries don't support it.
I also looked at libraries like https://github.com/stefanprodan/WebApiThrottle but it can only throttle based on the frequency of calls.
Thanks
You could add a piece of OwinMiddleware along these lines (influenced by the WebApiThrottle you linked to):
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Owin;

public class MaxConcurrentMiddleware : OwinMiddleware
{
    private readonly int maxConcurrentRequests;
    private int currentRequestCount;

    public MaxConcurrentMiddleware(OwinMiddleware next, int maxConcurrentRequests)
        : base(next)
    {
        this.maxConcurrentRequests = maxConcurrentRequests;
    }

    public override async Task Invoke(IOwinContext context)
    {
        try
        {
            if (Interlocked.Increment(ref currentRequestCount) > maxConcurrentRequests)
            {
                var response = context.Response;
                response.OnSendingHeaders(state =>
                {
                    var resp = (IOwinResponse)state;
                    resp.StatusCode = 429; // 429 Too Many Requests
                }, response);
                return; // in an async method, plain return (not Task.FromResult)
            }

            await Next.Invoke(context);
        }
        finally
        {
            Interlocked.Decrement(ref currentRequestCount);
        }
    }
}
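Note that this rejects excess requests with 429 rather than queuing them, while the question asks for queuing. A SemaphoreSlim-based variant along these lines would make callers wait for a free slot instead (a sketch; the class name is mine, not from the post):
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Owin;

// Hypothetical queueing variant: WaitAsync parks requests until a slot frees up
public class MaxConcurrentQueueMiddleware : OwinMiddleware
{
    private readonly SemaphoreSlim slots;

    public MaxConcurrentQueueMiddleware(OwinMiddleware next, int maxConcurrentRequests)
        : base(next)
    {
        slots = new SemaphoreSlim(maxConcurrentRequests);
    }

    public override async Task Invoke(IOwinContext context)
    {
        await slots.WaitAsync(); // queue (don't reject) until a slot is free
        try
        {
            await Next.Invoke(context);
        }
        finally
        {
            slots.Release();
        }
    }
}
Either middleware is registered in the Owin Startup class, e.g. app.Use<MaxConcurrentQueueMiddleware>(100);. Keep in mind that queued requests each still hold a connection and some memory, so in practice you may also want a bound on how many can wait.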
I work on an ASP.NET 5 (ASP.NET vNext) website. I use a SignalR server (1.0 Beta 3) for some processing. It is correctly set up, because I can successfully invoke server methods from a JavaScript browser client.
But when I use the .NET client (.NET 4.5 with SignalR 2.2.0), the method invocation fails with a generic "error 500".
I have downloaded both the SignalR server and client sources to be able to debug them. I have seen that the DefaultHttpClient.Post() client method is called with a valid "postData" parameter, but the server-side PersistentConnectionMiddleware.Invoke() method receives an HTTP context without any "Form" value inside the request, and that makes the SignalR server side fail in the ForeverTransport.ProcessSendRequest() method.
The post form seems to be lost in transit between the client and the server (I use the default IIS Express server).
Any idea? Thank you...
I saw the issue you opened and have committed a fix/workaround.
At the moment SignalR 3 doesn't work with the .NET client because the server expects all requests to be form encoded. The workaround is to update ProcessSendRequest() so it can get the non-form-encoded data from the 2.2 .NET client:
protected virtual async Task ProcessSendRequest()
{
var data = await GetData().PreserveCulture();
if (Received != null)
{
await Received(data).PreserveCulture();
}
}
private async Task<string> GetData()
{
    if (Context.Request.HasFormContentType)
    {
        // Browser clients send the payload as a form field named "data"
        var form = await Context.Request.ReadFormAsync().PreserveCulture();
        return form["data"];
    }
    else
    {
        // The 2.2 .NET client sends "data=..." as a raw url-encoded body
        var reader = new System.IO.StreamReader(Context.Request.Body);
        var output = await reader.ReadToEndAsync().PreserveCulture();
        var decoded = UrlDecoder.UrlDecode(output);
        return decoded.Replace("data=", "");
    }
}
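For reference, a minimal 2.2 .NET client call that exercises this code path might look like the following (the URL, hub, and method names are purely illustrative, not from the post):
// The .NET client posts the invocation as a raw url-encoded "data=..." body
// rather than a form, which is why the non-form branch above is needed.
var connection = new HubConnection("http://localhost:5000/");
var proxy = connection.CreateHubProxy("MyHub");
await connection.Start();
await proxy.Invoke("DoProcessing", "some argument");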
Azure has a fantastic ability to roll out updates so that the entire system is not offline all at once. However, when Azure updates my web roles, the AppDomains are understandably recycled. Sometimes the ASP.NET startup code can take over a minute to finish initializing, and that doesn't even begin until a user hits the new server.
Can I get Azure to start the AppDomain for the site and wait for it to come up before moving on to the next server? Perhaps using some magic in the OnStart method of WebRole?
See Azure Autoscale Restarts Running Instances which includes the following code:
public class WebRole : RoleEntryPoint
{
    public override bool OnStart()
    {
        // For information on handling configuration changes
        // see the MSDN topic at http://go.microsoft.com/fwlink/?LinkId=166357.

        // Find this instance's own IPv4 address...
        IPHostEntry ipEntry = Dns.GetHostEntry(Dns.GetHostName());
        string ip = null;
        foreach (IPAddress ipaddress in ipEntry.AddressList)
        {
            if (ipaddress.AddressFamily == AddressFamily.InterNetwork)
            {
                ip = ipaddress.ToString();
            }
        }

        // ...then request the site once so the ASP.NET AppDomain is warmed
        // up before OnStart returns and the instance goes into rotation.
        string urlToPing = "http://" + ip;
        HttpWebRequest req = WebRequest.Create(urlToPing) as HttpWebRequest;
        using (WebResponse resp = req.GetResponse())
        {
        }

        return base.OnStart();
    }
}
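If the warm-up request can fire while IIS is still spinning up, the single GetResponse call may fail. A slightly more defensive variant (my sketch, not from the linked post) retries the ping until the site answers:
// Retry the warm-up request until ASP.NET responds; the retry count
// and delay are illustrative values.
for (int attempt = 0; attempt < 30; attempt++)
{
    try
    {
        var warmupReq = (HttpWebRequest)WebRequest.Create(urlToPing);
        using (warmupReq.GetResponse())
        {
            break; // site is up; safe to go into rotation
        }
    }
    catch (WebException)
    {
        Thread.Sleep(TimeSpan.FromSeconds(10)); // still warming up
    }
}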
I have a working web service which, on load, contacts different websites and scrapes relevant information from them. As the requirements grew, so did the number of HttpWebRequests.
Right now I'm not using any asynchronous requests in the web service, which means the requests are performed one at a time. This has obviously become a burden, as a single call to the web service itself can take up to 2 minutes to complete.
Is there a way to convert all these HttpWebRequests inside the web service to run multi-threaded?
What would be the best way to achieve this?
Thanks!
If you are working with .NET 4 or later, you can use the Parallel class or the Task library, which make such things easy.
If you call all your web services in the same way (assuming they all respect the same WSDL and differ only by URL), you can use something like this:
using System;
using System.Collections.Generic;
using System.Linq;
using System.Net;
using System.Text.RegularExpressions;
namespace ConsoleApplication2
{
class Program
{
private const string StartUrl = @"http://blog.hand-net.com";
private static void Main()
{
var content = DownloadAsString(StartUrl);
// The "AsParallel" here is the key
var result = ExtractUrls(content).AsParallel().Select(
link =>
{
Console.WriteLine("... Fetching {0} started", link);
var req = WebRequest.CreateDefault(new Uri(link));
var resp = req.GetResponse();
var info = new { Link = link, Size = resp.ContentLength};
resp.Close();
return info;
}
);
foreach (var linkInfo in result)
{
Console.WriteLine("Link : {0}", linkInfo.Link);
Console.WriteLine("Size : {0}", linkInfo.Size);
}
}
private static string DownloadAsString(string url)
{
using (var wc = new WebClient())
{
return wc.DownloadString(url);
}
}
private static IEnumerable<string> ExtractUrls(string content)
{
var regEx = new Regex(@"<a\s+href=""(?<url>.*?)""");
var matches = regEx.Matches(content);
return matches.Cast<Match>().Select(m => m.Groups["url"].Value);
}
}
}
This small program first downloads an HTML page, then extracts all href attributes, which produces an array of remote files. The AsParallel call here allows the body of the Select to run in parallel. This code has no error handling or cancellation support, but it illustrates the AsParallel method.
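By default PLINQ chooses its own degree of parallelism. If you need to limit how many downloads run at once, for example to avoid hammering the remote hosts, WithDegreeOfParallelism caps it (a sketch based on the query above; the value 8 is illustrative):
// Same query as above, limited to at most 8 concurrent downloads
var result = ExtractUrls(content)
    .AsParallel()
    .WithDegreeOfParallelism(8)
    .Select(link =>
    {
        var req = WebRequest.CreateDefault(new Uri(link));
        var resp = req.GetResponse();
        var info = new { Link = link, Size = resp.ContentLength };
        resp.Close();
        return info;
    });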
If you can't call all your web services in the same way, you can also use something like this:
Task.WaitAll(
    Task.Factory.StartNew(() => GetDataFromWebServiceA()),
    Task.Factory.StartNew(() => GetDataFromWebServiceB()),
    Task.Factory.StartNew(() => GetDataFromWebServiceC()),
    Task.Factory.StartNew(() => GetDataFromWebServiceD())
);
This code adds four tasks that will be run "when possible". The WaitAll method simply blocks until all of the tasks have completed. By "when possible" I mean when a worker thread in the thread pool becomes free. When using the Task library there is, by default, roughly one thread-pool worker thread per processor core, so if you queue 100 tasks, they will be processed by about 4 worker threads on a 4-core computer.
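On .NET 4.5 and later, Task.WhenAll can replace the blocking WaitAll so the calling thread is released while the calls run (a sketch using the same hypothetical GetDataFromWebService methods):
// Non-blocking variant (requires an async caller): await the combined task
await Task.WhenAll(
    Task.Run(() => GetDataFromWebServiceA()),
    Task.Run(() => GetDataFromWebServiceB()),
    Task.Run(() => GetDataFromWebServiceC()),
    Task.Run(() => GetDataFromWebServiceD())
);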
I'm trying to set up a WCF service, hosted in IIS in ASP.NET compatibility mode, that is protected via Forms Authentication and accessed via a .NET User Control in IE (see Secure IIS hosted WCF service for access via IE hosted user control).
The User Control in IE is needed because it uses a specific third-party control for which there doesn't exist anything comparable in Silverlight or AJAX.
So I need the UserControl to set the authentication and session id cookies in the http request headers before it accesses the WCF service. My approach is to set up a Message inspector that does this.
So I've defined the Message Inspector:
public class CookieInspector : IClientMessageInspector {
public void AfterReceiveReply(ref Message reply, object correlationState) {
}
public object BeforeSendRequest(
ref Message request,
IClientChannel channel) {
HttpRequestMessageProperty messageProperty;
if (request.Properties.ContainsKey(HttpRequestMessageProperty.Name)) {
messageProperty = (HttpRequestMessageProperty) request.Properties[
HttpRequestMessageProperty.Name
];
}
else {
messageProperty = new HttpRequestMessageProperty();
request.Properties.Add(
HttpRequestMessageProperty.Name,
messageProperty
);
}
// Set test headers for now...
messageProperty.Headers.Add(HttpRequestHeader.Cookie, "Bob=Great");
messageProperty.Headers.Add("x-chris", "Beard");
return null;
}
}
and an Endpoint behaviour:
public class CookieBehavior : IEndpointBehavior {
public void AddBindingParameters(
ServiceEndpoint endpoint,
BindingParameterCollection bindingParameters) {
}
public void ApplyClientBehavior(
ServiceEndpoint endpoint,
ClientRuntime clientRuntime) {
clientRuntime.MessageInspectors.Add(new CookieInspector());
}
public void ApplyDispatchBehavior(
ServiceEndpoint endpoint,
EndpointDispatcher endpointDispatcher) {
}
public void Validate(ServiceEndpoint endpoint) {
}
}
and I configure and create my channel and WCF client in code:
var ea = new EndpointAddress("http://.../MyService.svc");
// EDIT: Http cookies can't be set with WSHttpBinding :-(
// var binding = new WSHttpBinding();
var binding = new BasicHttpBinding();
// Disable automatically managed cookies (which enables user cookies)
binding.AllowCookies = false;
binding.MaxReceivedMessageSize = 5000000;
binding.ReaderQuotas.MaxStringContentLength = 5000000;
var cf = new ChannelFactory<ITranslationServices>(binding, ea);
cf.Endpoint.Behaviors.Add(new CookieBehavior());
ITranslationServices service = cf.CreateChannel();
However, when I look at my request with Fiddler, the HTTP header and cookie aren't set, and I have no clue why. I've read various articles on the net, Stack Overflow, etc. that basically say it should work, but it doesn't. Either I'm missing something obvious, there's a bug in WCF, or it's something else?
Well, I figured it out: if I use a BasicHttpBinding instead of a WSHttpBinding it works. No idea why, though...
With WSHttpBinding, one logical message may be composed of more than one physical message. So when the successive physical calls are made, they may not carry the cookie appropriately.