We have a simple application in ASP.NET Core which calls a website and returns the content. The Controller method looks like this:
[HttpGet("test/get")]
public ActionResult<string> TestGet()
{
    var client = new WebClient
    {
        BaseAddress = "http://v-dev-a"
    };
    return client.DownloadString("/");
}
The URL we call is just the default page of an IIS server. I am using Apache JMeter to test 1000 requests in 10 seconds. I always hit the same issue: after about 300-400 requests it gets stuck for a few minutes and nothing works. The application which holds the controller is completely frozen.
In the performance monitor (MMC) I see that the connections are at 100%.
I tried the same code with ASP.NET 4.7.2 and it runs without any issues.
I also read that disposing the WebClient does not work properly and that I should make it static instead. See here.
Deactivating KeepAlive did not help either:
public class QPWebClient : WebClient
{
    protected override WebRequest GetWebRequest(Uri address)
    {
        var request = base.GetWebRequest(address);
        if (request is HttpWebRequest)
        {
            ((HttpWebRequest)request).KeepAlive = false;
        }
        return request;
    }
}
HttpClient has the same issue; switching to it does not change anything.
With dependency injection as recommended here, an exception is thrown saying that the web client can't handle more requests at the same time.
Another unsuccessful attempt was changing ConnectionLimit and SetTcpKeepAlive on the ServicePoint.
So I am out of ideas for how to solve this issue. My last resort is to move the application back to ASP.NET. Does anyone have an idea, or has anyone faced the same issue?
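For reference, the typed-client registration through IHttpClientFactory that is usually recommended for ASP.NET Core looks roughly like this (a minimal sketch; TestPageClient is just a placeholder name):
// Startup.ConfigureServices: the factory pools and reuses connections,
// which avoids exhausting sockets under load.
services.AddHttpClient<TestPageClient>(client =>
{
    client.BaseAddress = new Uri("http://v-dev-a");
});

// Hypothetical typed client injected into the controller.
public class TestPageClient
{
    private readonly HttpClient _client;

    public TestPageClient(HttpClient client)
    {
        _client = client;
    }

    public Task<string> GetDefaultPageAsync()
    {
        return _client.GetStringAsync("/");
    }
}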
I am using ASP.NET 4.7 and MVC5 with C#, running on IIS Express locally and published to Azure App Services.
I want to add something like:
Response.AppendToLog("XXXXX Original IP = 12.12.12.12 XXXXX");
This adds an original IP address to the request string in the "request" column of the web server log.
If I add this to a specific GET action it works fine. However, I do not want to add this code to every action. Is it possible to place it somewhere more central so that it gets executed on every GET request? This may be a simple question, but the answer eludes me at present.
Thanks for any wisdom.
EDIT: Is this via Custom Action Filters?
if (filterContext.HttpContext.Request.HttpMethod == "GET")
{
    Response.AppendToLog... // I know this will not work, as Response is not known here.
}
You almost have the answer. Try handling OnActionExecuted, which gives you access to the Response.
public class CustomActionFilter : ActionFilterAttribute
{
    public override void OnActionExecuting(ActionExecutingContext filterContext)
    {
        if (filterContext.HttpContext.Request.HttpMethod == "GET")
        {
            // Runs before the action executes.
        }
    }

    public override void OnActionExecuted(ActionExecutedContext filterContext)
    {
        // Runs after the action; the Response is available here.
        var response = filterContext.HttpContext.Response;
    }
}
My solution to write out text:
filterContext.RequestContext.HttpContext.Response.AppendToLog("OrigIP");
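To have the filter run on every request without decorating each action, it can be registered globally; a minimal sketch, assuming the standard MVC5 FilterConfig wired up from Application_Start:
// App_Start/FilterConfig.cs
public class FilterConfig
{
    public static void RegisterGlobalFilters(GlobalFilterCollection filters)
    {
        filters.Add(new HandleErrorAttribute());
        filters.Add(new CustomActionFilter()); // runs for every action on every controller
    }
}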
So, I'm trying to create a sample where there are the following components/features:
A Hangfire server, OWIN self-hosted from a Windows Service
SignalR notifications when jobs are completed
GitHub project
I can get the tasks queued and performed, but I'm having a hard time sorting out how to then notify the clients (all of them for now, until I get it working properly) when the task/job is completed.
My current issue is that I want the SignalR hub to be located in the "core" library SampleCore, but I don't see how to "register it" when starting the webapp SampleWeb. One way I've gotten around that is to create a hub class NotificationHubProxy that inherits the actual hub and that works fine for simple stuff (sending messages from one client to all).
In NotifyTaskComplete, I believe I can get the hub context and then send the message like so:
private void NotifyTaskComplete(int taskId)
{
    try
    {
        var hubContext = GlobalHost.ConnectionManager.GetHubContext<NotificationHub>();
        if (hubContext != null)
        {
            hubContext.Clients.All.sendMessage(string.Format("Task {0} completed.", taskId));
        }
    }
    catch (Exception ex)
    {
    }
}
BUT, I can't do that if NotificationHubProxy is the class being used, as it's part of the SampleWeb library and referencing it from SampleCore would lead to a circular reference.
I know the major issue is the hub being in an external assembly, but I can't for the life of me find a relevant sample that uses SignalR with MVC5 or is set up in this particular way.
Any ideas?
So, the solution was to do the following two things:
I had to use the SignalR .NET client from the SampleCore assembly to create a HubConnection, create a hub proxy for "NotificationHub", and use that to invoke the "SendMessage" method, like so:
private void NotifyTaskComplete(string hostUrl, int taskId)
{
    var hubConnection = new HubConnection(hostUrl);
    var hub = hubConnection.CreateHubProxy("NotificationHub");
    hubConnection.Start().Wait();
    hub.Invoke("SendMessage", taskId.ToString()).Wait();
}
BUT, as part of creating that HubConnection, I needed to know the URL of the OWIN instance. I decided to pass that as a parameter to the task, retrieving it like this:
private string GetHostAddress()
{
    var request = this.HttpContext.Request;
    return string.Format("{0}://{1}", request.Url.Scheme, request.Url.Authority);
}
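The host address then gets passed along when the job is enqueued, roughly like this (a sketch; SampleTask and Perform are placeholders for the actual Hangfire job class and method):
// Hypothetical enqueue call in a SampleWeb controller action.
var hostUrl = GetHostAddress();
BackgroundJob.Enqueue<SampleTask>(task => task.Perform(hostUrl, taskId));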
The solution to having a hub located in an external assembly is that the assembly needs to be loaded before the SignalR routing is set up, like so:
AppDomain.CurrentDomain.Load(typeof(SampleCore.NotificationHub).Assembly.FullName);
app.MapSignalR();
The solution for this part came from here.
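For completeness, the hub living in SampleCore that the proxy and the assembly-load line refer to would look roughly like this (a sketch; only the hub and method names come from the code above):
// SampleCore.NotificationHub - must be loaded before app.MapSignalR().
public class NotificationHub : Hub
{
    public void SendMessage(string message)
    {
        // Broadcast to all connected clients; "sendMessage" matches the
        // client-side callback used elsewhere in the sample.
        Clients.All.sendMessage(message);
    }
}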
We have an ASP.NET + MS SQL Server web-based application with approximately 100 users. It's hosted on our intranet on IIS 7.0. We are using Forms Authentication.
We need to keep the users (anyone who is logged in) logged in for 20 hours exactly. That means no one should be kicked out of the application (session timeout) before 20 hours, even if they are idle.
We tried many of the suggested approaches, like web.config changes, but nothing is working.
Our main question is: will we have to make code changes to keep the user sessions alive for this (or any) duration? Can someone suggest or point us to a solution?
Whenever you make a request to the server, the session timeout resets. So you can just make an AJAX call to an empty HTTP handler on the server; just make sure the handler's caching is disabled, otherwise the browser will cache the response and won't make a new request.
KeepSessionAlive.ashx.cs
public class KeepSessionAlive : IHttpHandler, IRequiresSessionState
{
    public bool IsReusable
    {
        get { return false; }
    }

    public void ProcessRequest(HttpContext context)
    {
        // Disable all caching so every ping actually reaches the server
        // and touches the session.
        context.Response.Cache.SetCacheability(HttpCacheability.NoCache);
        context.Response.Cache.SetExpires(DateTime.UtcNow.AddMinutes(-1));
        context.Response.Cache.SetNoStore();
        context.Response.Cache.SetNoServerCaching();
    }
}
JavaScript:
window.onload = function () {
    setInterval(KeepSessionAlive, 60000); // ping every 60 seconds
};

function KeepSessionAlive() {
    var url = "/KeepSessionAlive.ashx?";
    var xmlHttp = new XMLHttpRequest();
    xmlHttp.open("GET", url, true);
    xmlHttp.send();
}
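If you'd rather not rely on the ping alone, the timeouts themselves can also be raised in web.config so an idle user survives the full 20 hours. Roughly something like this (values are in minutes, so 1200 = 20 hours; the loginUrl is a placeholder):
<system.web>
  <!-- Keep the session alive for 20 hours of idle time -->
  <sessionState timeout="1200" />
  <authentication mode="Forms">
    <!-- Forms ticket valid for 20 hours; slidingExpiration renews it on activity -->
    <forms loginUrl="~/Account/Login" timeout="1200" slidingExpiration="true" />
  </authentication>
</system.web>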
I am currently investigating the possibility of using a Java Web Service (as described by the Info*Engine documentation of Windchill) in order to retrieve information regarding parts. I am using Windchill version 10.1.
I have successfully deployed a web service, which I consume in a .Net application. Calls which do not try to access Windchill information complete successfully. However, when trying to retrieve part information, I get a wt.method.AuthenticationException.
Here is the code that runs within the web service (the web service method simply calls this method):
public static String GetOnePart(String partNumber) throws WTException
{
    String partName = null;
    WTPart part = null;
    RemoteMethodServer server = RemoteMethodServer.getDefault();
    server.setUserName("theUsername");
    server.setPassword("thePassword");
    try {
        QuerySpec qspec = new QuerySpec(WTPart.class);
        qspec.appendWhere(new SearchCondition(WTPart.class, WTPart.NUMBER, SearchCondition.LIKE, partNumber), new int[]{0, 1});
        // This fails.
        QueryResult qr = PersistenceHelper.manager.find((StatementSpec) qspec);
        while (qr.hasMoreElements())
        {
            part = (WTPart) qr.nextElement();
            partName = part.getName();
        }
    } catch (AuthenticationException e) {
        // Exception caught here.
        partName = e.toString();
    }
    return partName;
}
This code works in a command-line application deployed on the server, but fails with a wt.method.AuthenticationException when performed from within the web service. I suspect it fails because using RemoteMethodServer is not what I should be doing, since the web service already runs within the MethodServer.
Anyhow, if anyone knows how to do this, it would be awesome.
A bonus question would be how to log from within the web service, and how to configure this logging.
Thank you.
You don't need to authenticate on the server side with this code:
RemoteMethodServer server = RemoteMethodServer.getDefault();
server.setUserName("theUsername");
server.setPassword("thePassword");
If you have followed the documentation (Windchill Help Center), your web service should be a class annotated with @WebService and @WebMethod(operationName = "getOnePart") that inherits from com.ptc.jws.servlet.JaxWsService.
You also have to pay attention to the security policy used during deployment.
The default Ant script is configured with:
security.policy=userNameAuthSymmetricKeys
So you need to handle that when you consume the web service from .NET.
For logging, you just call the log4j logger that is instantiated by default, e.g. $log.debug("Hello").
You can't pre-authenticate on the server side.
You can write the auth into your client though. I'm not sure what the .NET equivalent is, but this works for Java clients:
private static final String USERNAME = "admin";
private static final String PASSWORD = "password";

static {
    java.net.Authenticator.setDefault(new java.net.Authenticator() {
        @Override
        protected java.net.PasswordAuthentication getPasswordAuthentication() {
            return new java.net.PasswordAuthentication(USERNAME, PASSWORD.toCharArray());
        }
    });
}
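A rough .NET counterpart, assuming the service is consumed through a WCF client generated from its WSDL and that the binding accepts username/password credentials (the client class name and part number are placeholders; the userNameAuthSymmetricKeys policy may require additional binding configuration):
// Hypothetical proxy generated from the Windchill web service WSDL.
var client = new PartServiceClient();

// Supply the same credentials the Java Authenticator provides.
client.ClientCredentials.UserName.UserName = "theUsername";
client.ClientCredentials.UserName.Password = "thePassword";

var partName = client.GetOnePart("somePartNumber");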
I am just about to launch my ASP.NET MVC3 web app to production. However, as a complex app, it takes a LONG time to start up. Obviously, I don't want my users waiting over a minute for their first request to go through after the AppPool has timed out.
From my research, I've found that there are two ways to combat this:
Run a worker role or other process which polls the website every 19 minutes, preventing the app pool from idling out.
Change the timeout from the default 20 minutes to something much larger.
As solution 2 seems like the better idea, I just wondered what the disadvantages of this would be. Will I run out of memory, etc.?
Thanks.
Could you use the auto-start feature of IIS? There is a post here that presents this idea.
You'd have IIS 7.5 and Win2k8 R2 with Azure OS family 2. You'd just need to be able to script/automate any setup steps and configuration.
I do this with a background thread that requests a keepalive URL every 15 minutes. Not only does this keep the app from going idle, but it also warms up the app right away anytime the web role or virtual machine restarts or is rebuilt.
This is all possible because Web Roles really are just Worker Roles that also do IIS stuff. So you can still use all the standard Worker Role startup hooks in a Web Role.
I got the idea from this blog post but tweaked the code to do a few extra warmup tasks.
First, I have a class that inherits from RoleEntryPoint (it does some other things besides this warm up task and I removed them for simplicity):
public class WebRole : RoleEntryPoint
{
    // other unrelated member variables appear here...
    private WarmUp _warmUp;

    public override bool OnStart()
    {
        // other startup stuff appears here...
        _warmUp = new WarmUp();
        _warmUp.Start();
        return base.OnStart();
    }
}
All the actual warm up logic is in this WarmUp class. When it first runs it hits a handful of URLs on the local instance IP address (vs the public, load balanced hostname) to get things in memory so that the first people to use it get the fastest possible response time. Then, it loops and hits a single keepalive URL (again on the local role instance) that doesn't do any work and just serves to make sure that IIS doesn't shut down the application pool as idle.
public class WarmUp
{
    private Thread worker;

    public void Start()
    {
        worker = new Thread(new ThreadStart(Run));
        worker.IsBackground = true;
        worker.Start();
    }

    private void Run()
    {
        // "http" has to match the endpointName in your ServiceDefinition.csdef file.
        var endpoint = RoleEnvironment.CurrentRoleInstance.InstanceEndpoints["http"];
        var pages = new string[]
        {
            "/",
            "/help",
            "/signin",
            "/register",
            "/faqs"
        };
        foreach (var page in pages)
        {
            try
            {
                var address = String.Format("{0}://{1}:{2}{3}",
                    endpoint.Protocol,
                    endpoint.IPEndpoint.Address,
                    endpoint.IPEndpoint.Port,
                    page);
                var webClient = new WebClient();
                webClient.DownloadString(address);
                Debug.WriteLine(string.Format("Warmed {0}", address));
            }
            catch (Exception ex)
            {
                Debug.WriteLine(ex.ToString());
            }
        }

        var keepalive = String.Format("{0}://{1}:{2}{3}",
            endpoint.Protocol,
            endpoint.IPEndpoint.Address,
            endpoint.IPEndpoint.Port,
            "/keepalive");
        while (true)
        {
            try
            {
                var webClient = new WebClient();
                webClient.DownloadString(keepalive);
                Debug.WriteLine(string.Format("Pinged {0}", keepalive));
            }
            catch (Exception ex)
            {
                // absorb
            }
            Thread.Sleep(900000); // 15 minutes
        }
    }
}
Personally I'd change the timeout, but both should work: effectively, both prevent the worker process from shutting down.
I believe the timeout is there to avoid IIS retaining resources that aren't needed for servers with lots of Web sites that are lightly used. Given that heavily used sites (like this one!) don't shut down their worker processes I don't think you'll see any memory issues.
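If you go with option 2 on an Azure web role, one way to automate the change is to adjust the application pool from the role's OnStart using Microsoft.Web.Administration; a sketch (assumes the role runs elevated and references Microsoft.Web.Administration; the pool lookup is kept simplistic):
using System;
using Microsoft.Web.Administration;

public static class AppPoolConfig
{
    public static void DisableIdleTimeout()
    {
        using (var serverManager = new ServerManager())
        {
            foreach (var pool in serverManager.ApplicationPools)
            {
                // TimeSpan.Zero disables idle shutdown entirely;
                // alternatively set a long value such as TimeSpan.FromHours(10).
                pool.ProcessModel.IdleTimeout = TimeSpan.Zero;
            }
            serverManager.CommitChanges();
        }
    }
}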