How can I correctly measure ASP.NET Core request durations?

Given the following most basic of ASP.NET Core applications (note the Thread.Sleep):
using System.Diagnostics;
using System.Threading;
using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.Hosting;
using Microsoft.AspNetCore.Http;
using Microsoft.Extensions.Hosting;

public class Program
{
    public static void Main(string[] args)
    {
        CreateHostBuilder(args).Build().Run();
    }

    public static IHostBuilder CreateHostBuilder(string[] args) =>
        Host.CreateDefaultBuilder(args)
            .ConfigureWebHostDefaults(webBuilder =>
            {
                webBuilder.Configure(appBuilder =>
                    appBuilder.Run(async context =>
                    {
                        var stopwatch = Stopwatch.StartNew();
                        Thread.Sleep(1000); // deliberately blocks a thread-pool thread
                        await context.Response.WriteAsync($"Finished in {stopwatch.ElapsedMilliseconds} milliseconds.");
                    }));
            });
}
And the following appsettings.json
{
  "Logging": {
    "LogLevel": {
      "Default": "None",
      "Microsoft.AspNetCore.Hosting.Diagnostics": "Information"
    }
  },
  "AllowedHosts": "*"
}
If I run even a moderate load test (100 requests, using bombardier in my case), I see latencies of around 5 seconds.
~/go/bin/bombardier http://localhost:5000 -l -n 100 -t 60s
Bombarding http://localhost:51568 with 100 request(s) using 125 connection(s)
100 / 100 [=================================================================================================================================] 100.00% 16/s 6s
Done!
Statistics        Avg      Stdev        Max
  Reqs/sec       19.46     250.28    4086.58
  Latency        5.21s   366.21ms      6.05s
  Latency Distribution
     50%   5.05s
     75%   5.05s
     90%   6.04s
     95%   6.05s
     99%   6.05s
  HTTP codes:
    1xx - 0, 2xx - 100, 3xx - 0, 4xx - 0, 5xx - 0
    others - 0
  Throughput:    3.31KB/s
However, all I see in the logs is
info: Microsoft.AspNetCore.Hosting.Diagnostics[2]
      Request finished in 1003.3658ms 200
Clearly the requests are taking more than 1 second. I believe the unaccounted-for 4 seconds are spent while the request is queued on the ThreadPool.
So my question is: how can I measure this latency from inside my application?
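For what it's worth, one way I could try to observe the suspected thread-pool queueing from inside the app (a sketch, assuming .NET Core 3.0+, where ThreadPool.PendingWorkItemCount exists) is to log the pool backlog alongside the handler timing:

appBuilder.Run(async context =>
{
    var stopwatch = Stopwatch.StartNew();
    Thread.Sleep(1000);
    // ThreadPool.PendingWorkItemCount (available since .NET Core 3.0) shows
    // how much work is still queued behind this request.
    await context.Response.WriteAsync(
        $"Finished in {stopwatch.ElapsedMilliseconds} ms; " +
        $"pending work items: {ThreadPool.PendingWorkItemCount}.");
});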

I ran your application in my environment, and the ASP.NET Core logs are very similar to yours:
info: Microsoft.AspNetCore.Hosting.Diagnostics[1]
Request starting HTTP/1.1 GET http://localhost:5000/
info: Microsoft.AspNetCore.Hosting.Diagnostics[2]
Request finished in 1022.4689ms 200
info: Microsoft.AspNetCore.Hosting.Diagnostics[1]
Request starting HTTP/1.1 GET http://localhost:5000/favicon.ico
info: Microsoft.AspNetCore.Hosting.Diagnostics[2]
Request finished in 1004.1694ms 200
info: Microsoft.AspNetCore.Hosting.Diagnostics[1]
Request starting HTTP/1.1 GET http://localhost:5000/
info: Microsoft.AspNetCore.Hosting.Diagnostics[2]
Request finished in 1003.4582ms 200
info: Microsoft.AspNetCore.Hosting.Diagnostics[1]
Request starting HTTP/1.1 GET http://localhost:5000/favicon.ico
info: Microsoft.AspNetCore.Hosting.Diagnostics[2]
Request finished in 1004.3703ms 200
info: Microsoft.AspNetCore.Hosting.Diagnostics[1]
Request starting HTTP/1.1 GET http://localhost:5000/
info: Microsoft.AspNetCore.Hosting.Diagnostics[2]
Request finished in 1003.3915ms 200
info: Microsoft.AspNetCore.Hosting.Diagnostics[1]
Request starting HTTP/1.1 GET http://localhost:5000/favicon.ico
info: Microsoft.AspNetCore.Hosting.Diagnostics[2]
Request finished in 1004.3106ms 200
info: Microsoft.AspNetCore.Hosting.Diagnostics[1]
Request starting HTTP/1.1 GET http://localhost:5000/
info: Microsoft.AspNetCore.Hosting.Diagnostics[2]
Request finished in 1003.122ms 200
info: Microsoft.AspNetCore.Hosting.Diagnostics[1]
Request starting HTTP/1.1 GET http://localhost:5000/
info: Microsoft.AspNetCore.Hosting.Diagnostics[1]
Request starting HTTP/1.1 GET http://localhost:5000/
info: Microsoft.AspNetCore.Hosting.Diagnostics[1]
Request starting HTTP/1.1 GET http://localhost:5000/
info: Microsoft.AspNetCore.Hosting.Diagnostics[1]
Request starting HTTP/1.1 GET http://localhost:5000/
info: Microsoft.AspNetCore.Hosting.Diagnostics[1]
Request starting HTTP/1.1 GET http://localhost:5000/
info: Microsoft.AspNetCore.Hosting.Diagnostics[1]
Request starting HTTP/1.1 GET http://localhost:5000/
info: Microsoft.AspNetCore.Hosting.Diagnostics[1]
Request starting HTTP/1.1 GET http://localhost:5000/
info: Microsoft.AspNetCore.Hosting.Diagnostics[1]
Request starting HTTP/1.1 GET http://localhost:5000/
info: Microsoft.AspNetCore.Hosting.Diagnostics[1]
Request starting HTTP/1.1 GET http://localhost:5000/
info: Microsoft.AspNetCore.Hosting.Diagnostics[1]
Request starting HTTP/1.1 GET http://localhost:5000/
info: Microsoft.AspNetCore.Hosting.Diagnostics[2]
Request finished in 1017.028ms 200
info: Microsoft.AspNetCore.Hosting.Diagnostics[1]
Request starting HTTP/1.1 GET http://localhost:5000/
info: Microsoft.AspNetCore.Hosting.Diagnostics[2]
Request finished in 1004.2742ms 200
info: Microsoft.AspNetCore.Hosting.Diagnostics[2]
Request finished in 1006.5832ms 200
info: Microsoft.AspNetCore.Hosting.Diagnostics[1]
Request starting HTTP/1.1 GET http://localhost:5000/
info: Microsoft.AspNetCore.Hosting.Diagnostics[1]
Request starting HTTP/1.1 GET http://localhost:5000/
info: Microsoft.AspNetCore.Hosting.Diagnostics[2]
Request finished in 1004.9214ms 200
info: Microsoft.AspNetCore.Hosting.Diagnostics[1]
Request starting HTTP/1.1 GET http://localhost:5000/
info: Microsoft.AspNetCore.Hosting.Diagnostics[2]
Request finished in 1012.4532ms 200
As for bombardier, I got the output below:
bombardier-windows-amd64.exe http://localhost:5000 -l -n 100 -t 60s
Bombarding http://localhost:5000 with 100 request(s) using 125 connection(s)
100 / 100 [==========================================================================================] 100.00% 11/s 8s
Done!
Statistics        Avg      Stdev        Max
  Reqs/sec       11.29      99.10    1303.09
  Latency        5.78s      1.42s      8.78s
  Latency Distribution
     50%   5.17s
     75%   7.79s
     90%   7.88s
     95%   8.24s
     99%   8.34s
  HTTP codes:
    1xx - 0, 2xx - 100, 3xx - 0, 4xx - 0, 5xx - 0
    others - 0
  Throughput:    2.27KB/s
The Chrome DevTools network output told the same story (screenshot omitted).
I also tested with cURL (note that I had to run echo [%date%, %time%] a second time after the curl command manually; better accuracy could be achieved by doing it in a .bat file), but overall the output confirms that the request took ~1100 ms:
C:\curl\bin>echo [%date%, %time%] && curl http://localhost:5000/
[Sun 07/12/2020, 13:12:35.61]
Finished in 1001 milliseconds.
C:\curl\bin>echo [%date%, %time%]
[Sun 07/12/2020, 13:12:37.45]
So based on all of the above, it seems that bombardier's output differs from what the other tools report; hence we might have misunderstood the meaning of its reported latency! I made a small change to the command, letting it handle the 100 requests using only 10 connections instead of the default 125, and the output was:
bombardier-windows-amd64.exe -c10 http://localhost:5000 -l -n 100 -t 60s
Bombarding http://localhost:5000 with 100 request(s) using 10 connection(s)
100 / 100 [==========================================================================================] 100.00% 8/s 11s
Done!
Statistics        Avg      Stdev        Max
  Reqs/sec        9.07      26.73     211.08
  Latency        1.06s   179.57ms      2.04s
  Latency Distribution
     50%   1.01s
     75%   1.02s
     90%   1.07s
     95%   1.19s
     99%   1.86s
  HTTP codes:
    1xx - 0, 2xx - 100, 3xx - 0, 4xx - 0, 5xx - 0
    others - 0
  Throughput:    1.79KB/s
Based on all of the above, I can confirm that a single request takes ~1 second. As for bulk-request benchmarks, please try Postman; otherwise we need to dig deeper to understand what bombardier's latency means exactly and how it is calculated.
Update
I made a small console tool that fires HttpClient requests in bulk, and it confirms the ~1 second response time; a sketch of it follows.
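This is a minimal sketch of such a bulk HttpClient tool, not the exact tool I used; the URL, batch size, and structure are assumptions mirroring the question:

using System;
using System.Collections.Concurrent;
using System.Diagnostics;
using System.Linq;
using System.Net.Http;
using System.Threading.Tasks;

class BulkClient
{
    static async Task Main()
    {
        using var client = new HttpClient();
        var timings = new ConcurrentBag<long>();
        // 100 requests total, 10 concurrent at a time (mirrors -c10 above).
        for (var batch = 0; batch < 10; batch++)
        {
            await Task.WhenAll(Enumerable.Range(0, 10).Select(async _ =>
            {
                var sw = Stopwatch.StartNew();
                await client.GetStringAsync("http://localhost:5000/");
                timings.Add(sw.ElapsedMilliseconds);
            }));
        }
        Console.WriteLine($"Avg: {timings.Average():F0} ms, Max: {timings.Max()} ms");
    }
}

I also tried two benchmarking tools from awesome-http-benchmark: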
baton.exe -u http://localhost:5000 -c 10 -r 100
Configuring to send GET requests to: http://localhost:5000
Generating the requests...
Finished generating the requests
Sending the requests to the server...
Finished sending the requests
Processing the results...
====================== Results ======================
Total requests: 100
Time taken to complete requests: 10.2670832s
Requests per second: 10
===================== Breakdown =====================
Number of connection errors: 0
Number of 1xx responses: 0
Number of 2xx responses: 100
Number of 3xx responses: 0
Number of 4xx responses: 0
Number of 5xx responses: 0
=====================================================
cassowary run -u http://localhost:5000 -c 10 -n 100
Starting Load Test with 100 requests using 10 concurrent users
100% |████████████████████████████████████████| [10s:0s] 10.2299727s
TCP Connect.....................: Avg/mean=1.90ms Median=2.00ms p(95)=2.00ms
Server Processing...............: Avg/mean=1014.94ms Median=1008.00ms p(95)=1093.00ms
Content Transfer................: Avg/mean=0.17ms Median=0.00ms p(95)=1.00ms
Summary:
Total Req.......................: 100
Failed Req......................: 0
DNS Lookup......................: 5.00ms
Req/s...........................: 9.78
Finally, I used nginx on port 2020 as a reverse proxy in front of Kestrel to see bombardier's output:
bombardier-windows-amd64.exe http://localhost:2020 -l -n 100 -t 60s
Bombarding http://localhost:2020 with 100 request(s) using 125 connection(s)
100 / 100 [==========================================================================================] 100.00% 9/s 10s
Done!
Statistics        Avg      Stdev        Max
  Reqs/sec       11.76     128.07    2002.66
  Latency        9.08s   761.43ms     10.04s
  Latency Distribution
     50%   9.06s
     75%   9.07s
     90%   9.07s
     95%  10.02s
     99%  10.04s
  HTTP codes:
    1xx - 0, 2xx - 95, 3xx - 0, 4xx - 0, 5xx - 5
    others - 0
  Throughput:    2.51KB/s
As you can see, even with nginx in front it shows 9 seconds of latency! That should confirm there is an issue with bombardier's latency definition/calculation.
Below is the nginx config:
server {
    listen 2020;

    location / {
        proxy_pass http://localhost:5000/;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_http_version 1.1;
        proxy_redirect off;
    }
}
Bonus
If you want to hack Thread.Sleep's performance into behaving like await Task.Delay, change Main to:
public static void Main(string[] args)
{
    ThreadPool.SetMinThreads(130, 130); // Don't use in production.
    CreateHostBuilder(args).Build().Run();
}
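A sketch of the non-blocking alternative that the bonus alludes to, replacing the middleware body from the question (same observable behavior, but the thread returns to the pool while waiting):

appBuilder.Run(async context =>
{
    var stopwatch = Stopwatch.StartNew();
    await Task.Delay(1000); // does not block a thread-pool thread, unlike Thread.Sleep
    await context.Response.WriteAsync($"Finished in {stopwatch.ElapsedMilliseconds} milliseconds.");
});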

Related

nginx taking too long to respond

I have an nginx used mainly as a reverse proxy for a couple of upstream services. This nginx has a simple endpoint used for health checks:
location /ping { return 200 '{"ping":"successful"}'; }
The problem I'm having is that this ping takes too long to be answered:
$ cat /proc/loadavg; date ; httpstat localhost/ping?foo=bar
2.93 1.98 1.94 8/433 16725
Thu Jul 15 15:25:08 UTC 2021
Connected to 127.0.0.1:80 from 127.0.0.1:42946
HTTP/1.1 200 OK
Date: Thu, 15 Jul 2021 15:26:24 GMT
X-Request-ID: b8d276b0b3828113cfee3bf2daa01293
  DNS Lookup   TCP Connection   Server Processing   Content Transfer
[     4ms    |       0ms      |      76032ms       |       0ms       ]
             |                |                    |                 |
    namelookup:4ms            |                    |                 |
                       connect:4ms                 |                 |
                                      starttransfer:76036ms          |
                                                                total:76036ms
That ^ tells me that the average load was low at the time of the request (a 1-minute load average of 2.93 on an 8-core server is OK).
curl/httpstat initiated the request at 15:25:08 and the response was obtained at 15:26:24.
The connection was established fast and the request sent; it then took 76s for the server to respond.
If I look at the access log for this ping I see "req_time":"0.000" (this is the $request_time variable).
{"t":"2021-07-15T15:26:24+00:00","id":"b8d276b0b3828113cfee3bf2daa01293","cid":"18581172","pid":"13631","host":"localhost","req":"GET /ping?foo=bar HTTP/1.1","scheme":"","status":"200","req_time":"0.000","body_sent":"21","bytes_sent":"373","content_length":"","request_length":"85","stats":"","upstream":{"status":"","sent":"","received":"","addr":"","conn_time":"","resp_time":""},"client":{"id":"#","agent":"curl/7.58.0","addr":",127.0.0.1:42946"},"limit_status":{"conn":"","req":""}}
This is the access log format, in case anybody wonders what the rest of the values are:
log_format main escape=json '{"t":"$time_iso8601","id":"$ring_request_id","cid":"$connection","pid":"$pid","host":"$http_host","req":"$request","scheme":"$http_x_forwarded_proto","status":"$status","req_time":"$request_time","body_sent":"$body_bytes_sent","bytes_sent":"$bytes_sent","content_length":"$content_length","request_length":"$request_length","stats":"$location_tag","upstream":{"status":"$upstream_status","sent":"$upstream_bytes_sent","received":"$upstream_bytes_received","addr":"$upstream_addr","conn_time":"$upstream_connect_time","resp_time":"$upstream_response_time"},"client":{"id":"$http_x_auth_appid$http_x_ringdevicetype#$remote_user$http_x_auth_userid","agent":"$http_user_agent","addr":"$http_x_forwarded_for,$remote_addr:$remote_port"},"limit_status":{"conn":"$limit_conn_status","req":"$limit_req_status"}}';
My question is: where could nginx have spent these 76s if the request just took 0s to be processed and responded?
Something special to mention is that the server was timing out a lot of connections with the upstreams at that moment as well: we see a lot of upstream timed out (110: Connection timed out) while reading response header from upstream and upstream server temporarily disabled while reading response header from upstream.
So these two are related; what I can't see is why upstream timeouts would lead to a /ping taking 76s to be attended to and responded to when both CPU and load are low/acceptable.
Any idea?

ASP.NET Core with React - 431 Request headers too long

I have a dotnet application that I start with the dotnet run command. I also have a React app, that I start with yarn start.
When I open the browser on localhost:3000 (where the react app is) the server log looks like this:
....this goes on for a long time
Request starting HTTP/1.1 GET http://localhost:5000/build/bundle.js
info: Microsoft.AspNetCore.Hosting.Internal.WebHost[1]
Request starting HTTP/1.1 GET http://localhost:5000/build/bundle.js
info: Microsoft.AspNetCore.Hosting.Internal.WebHost[1]
Request starting HTTP/1.1 GET http://localhost:5000/build/bundle.js
info: Microsoft.AspNetCore.Hosting.Internal.WebHost[1]
Request starting HTTP/1.1 GET http://localhost:5000/build/bundle.js
info: Microsoft.AspNetCore.Hosting.Internal.WebHost[1]
Request starting HTTP/1.1 GET http://localhost:5000/build/bundle.js
info: Microsoft.AspNetCore.Hosting.Internal.WebHost[1]
Request starting HTTP/1.1 GET http://localhost:5000/build/bundle.js
info: Microsoft.AspNetCore.Hosting.Internal.WebHost[1]
Request starting HTTP/1.1 GET http://localhost:5000/build/bundle.js
info: Microsoft.AspNetCore.Hosting.Internal.WebHost[1]
Request starting HTTP/1.1 GET http://localhost:5000/build/bundle.js
info: Microsoft.AspNetCore.Hosting.Internal.WebHost[1]
Request starting HTTP/1.1 GET http://localhost:5000/build/bundle.js
info: Microsoft.AspNetCore.Hosting.Internal.WebHost[1]
Request starting HTTP/1.1 GET http://localhost:5000/build/bundle.js
info: Microsoft.AspNetCore.Hosting.Internal.WebHost[1]
Request starting HTTP/1.1 GET http://localhost:5000/build/bundle.js
info: Microsoft.AspNetCore.Hosting.Internal.WebHost[1]
Request starting HTTP/1.1 GET http://localhost:5000/build/bundle.js
info: Microsoft.AspNetCore.Hosting.Internal.WebHost[1]
Request starting HTTP/1.1 GET http://localhost:5000/build/bundle.js
info: Microsoft.AspNetCore.Hosting.Internal.WebHost[1]
Request starting HTTP/1.1 GET http://localhost:5000/build/bundle.js
info: Microsoft.AspNetCore.Hosting.Internal.WebHost[1]
Request starting HTTP/1.1 GET http://localhost:5000/build/bundle.js
info: Microsoft.AspNetCore.Hosting.Internal.WebHost[1]
Request starting HTTP/1.1 GET http://localhost:5000/build/bundle.js
info: Microsoft.AspNetCore.Hosting.Internal.WebHost[1]
Request starting HTTP/1.1 GET http://localhost:5000/build/bundle.js
info: Microsoft.AspNetCore.Hosting.Internal.WebHost[1]
Request starting HTTP/1.1 GET http://localhost:5000/build/bundle.js
info: Microsoft.AspNetCore.Hosting.Internal.WebHost[1]
Request starting HTTP/1.1 GET http://localhost:5000/build/bundle.js
info: Microsoft.AspNetCore.Hosting.Internal.WebHost[1]
Request starting HTTP/1.1 GET http://localhost:5000/build/bundle.js
info: Microsoft.AspNetCore.Server.Kestrel[17]
Connection id "0HLMFVTO65C53" bad request data: "Request headers too long."
Microsoft.AspNetCore.Server.Kestrel.Core.BadHttpRequestException: Request headers too long.
at Microsoft.AspNetCore.Server.Kestrel.Core.BadHttpRequestException.Throw(RequestRejectionReason reason)
at Microsoft.AspNetCore.Server.Kestrel.Core.Internal.Http.Http1Connection.TakeMessageHeaders(ReadOnlySequence`1 buffer, SequencePosition& consumed, SequencePosition& examined)
at Microsoft.AspNetCore.Server.Kestrel.Core.Internal.Http.Http1Connection.ParseRequest(ReadOnlySequence`1 buffer, SequencePosition& consumed, SequencePosition& examined)
at Microsoft.AspNetCore.Server.Kestrel.Core.Internal.Http.Http1Connection.TryParseRequest(ReadResult result, Boolean& endConnection)
at Microsoft.AspNetCore.Server.Kestrel.Core.Internal.Http.HttpProtocol.ProcessRequests[TContext](IHttpApplication`1 application)
at Microsoft.AspNetCore.Server.Kestrel.Core.Internal.Http.HttpProtocol.ProcessRequestsAsync[TContext](IHttpApplication`1 application)
After about 15 seconds of this the page loads, but I get an error in the browser console about bundle.js failing with a 431.
If I make the request headers' max total size larger for the Kestrel server, the same thing happens, but it goes on for even longer and the end result is a 500 server error instead of a 431.
Moreover, if I try to make a simple DELETE request to the server using Postman, the result is pretty much the same: as if the request were stuck in an infinite loop, eventually returning a 431.
Lines from Startup.cs that might be relevant:
services.AddMvc().SetCompatibilityVersion(CompatibilityVersion.Version_2_2);

// In production, the React files will be served from this directory
services.AddSpaStaticFiles(configuration =>
{
    configuration.RootPath = "client/build";
});

app.UseMvc(routes =>
{
    routes.MapRoute(
        name: "default",
        template: "{controller}/{action=Index}/{id?}");
});

app.UseSpaStaticFiles();

app.UseSpa(spa =>
{
    if (env.IsDevelopment())
        spa.UseProxyToSpaDevelopmentServer("http://localhost:3000");
});
What is going on?
I ran into the same issue, but with Angular, using ASP.NET Core during localhost dev. The solution for me was to set the header size for node in the "start" script inside my package.json file.
Inside the scripts JSON object:
"start": "node --max-http-header-size=100000 ./node_modules/#angular/cli/bin/ng serve",

Varnish Backend Handling

I am facing a rather tricky issue where it appears that Varnish is closing the backend connection without waiting for a response from the backend.
We are using nginx to serve static content. Below is the sequence of messages:
Varnish sends POST request to App
App sends back 500 Internal Server Error
Varnish interprets the 500 internal Server Error (to display static error page)
Varnish sends GET request to Nginx server (on the same server) to serve static content
Varnish shows the following error message (even though nginx sends the response successfully within milliseconds):
- VCL_call BACKEND_FETCH
- VCL_return fetch
- BackendOpen 38 boot.staticpages 127.0.0.1 82 127.0.0.1 35064
- BackendStart 127.0.0.1 82
- FetchError backend write error: 0 (Success)
- Timestamp Bereq: 1543420795.016075 5.106813 0.000099
- BackendClose 38 boot.staticpages
- Timestamp Beresp: 1543420795.016497 5.107235 0.000422
- Timestamp Error: 1543420795.016503 5.107241 0.000005
- BerespProtocol HTTP/1.1
- BerespStatus 503
- BerespReason Service Unavailable
- BerespReason Backend fetch failed
- BerespHeader Date: Wed, 28 Nov 2018 15:59:55 GMT
- BerespHeader Server: Varnish
- VCL_call BACKEND_ERROR
Varnish then goes to the same nginx server again to display the default content.
nginx sends the response, and Varnish accepts it and sends it back to the customer.
It appears that the backend connection gets closed pretty quickly.
Any help in this regard is highly appreciated.
Thanks,
We resolved the issue; below is a summary of what it was and how we fixed it.
Issue Summary:
Varnish displays a backend fetch error when the original POST request results in a 500 Internal Server Error and backend_response is used to GET a customized static 500 Internal Server Error page.
varnishlog output (only the relevant messages); it can be seen that the backend is closed as soon as the request is sent:
- VCL_call BACKEND_FETCH
- VCL_return fetch
- BackendOpen 24 boot.staticpages 127.0.0.1 82 127.0.0.1 40696
- BackendStart 127.0.0.1 82
- FetchError backend write error: 0 (Success)
- Timestamp Bereq: 1543416195.877756 5.116981 0.000046
- BackendClose 24 boot.staticpages
- Timestamp Beresp: 1543416195.877888 5.117113 0.000132
- Timestamp Error: 1543416195.877892 5.117117 0.000004
- BerespProtocol HTTP/1.1
- BerespStatus 503
- BerespReason Service Unavailable
- BerespReason Backend fetch failed
- BerespHeader Date: Wed, 28 Nov 2018 14:43:15 GMT
- BerespHeader Server: Varnish
- VCL_call BACKEND_ERROR
Root Cause:
Varnish can't retry because there's no body to send anymore.
Resolution:
Cache the body of the original request by using std.cache_req_body(10KB); https://varnish-cache.org/docs/trunk/reference/vmod_generated.html#func-cache-req-body
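In VCL that fix would look roughly like this (a sketch; it assumes the std vmod is available and VCL 4.0+, and the 10 KB cap comes from the resolution above):

vcl 4.0;
import std;

sub vcl_recv {
    # Buffer up to 10 KB of the request body so Varnish can replay
    # the POST on a retry instead of failing the backend fetch.
    std.cache_req_body(10KB);
}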

Why does .NET Core hosted on Linux prepend a backslash to URL hostnames?

In the log of my .NET Core app I can see that the URLs for all requests are prepended with a backslash. The application is hosted on Amazon Linux 2 using .NET Core 2.0. Here is the URL used in the request: http://www.mvc.meetcorepoint.com/ and here is the URL received by .NET Core: http://\www.mvc.meetcorepoint.com/. Here is a part of the log that shows this for other URLs too:
dotnet-ids.mvc[22185]: info: Microsoft.AspNetCore.Hosting.Internal.WebHost[1]
dotnet-ids.mvc[22185]: Request starting HTTP/1.1 GET http://\www.mvc.meetcorepoint.com/
dotnet-ids.mvc[22185]: info: Microsoft.AspNetCore.Mvc.Internal.ControllerActionInvoker[1]
dotnet-ids.mvc[22185]: Executing action method MvcClient.Controllers.HomeController.Index (MvcClient) with arguments ((null)) -
dotnet-ids.mvc[22185]: info: Microsoft.AspNetCore.Mvc.ViewFeatures.Internal.ViewResultExecutor[1]
dotnet-ids.mvc[22185]: Executing ViewResult, running view at path /Views/Home/Index.cshtml.
dotnet-ids.mvc[22185]: info: Microsoft.AspNetCore.Mvc.Internal.ControllerActionInvoker[2]
dotnet-ids.mvc[22185]: Executed action MvcClient.Controllers.HomeController.Index (MvcClient) in 0.5249ms
dotnet-ids.mvc[22185]: info: Microsoft.AspNetCore.Hosting.Internal.WebHost[2]
dotnet-ids.mvc[22185]: Request finished in 0.7982ms 200 text/html; charset=utf-8
dotnet-ids.mvc[22185]: info: Microsoft.AspNetCore.Hosting.Internal.WebHost[1]
dotnet-ids.mvc[22185]: Request starting HTTP/1.1 GET http://\www.mvc.meetcorepoint.com/lib/bootstrap/dist/css/bootstrap.css
dotnet-ids.mvc[22185]: info: Microsoft.AspNetCore.StaticFiles.StaticFileMiddleware[2]
dotnet-ids.mvc[22185]: Sending file. Request path: '/lib/bootstrap/dist/css/bootstrap.css'. Physical path: '/var/www/dotnet/ids
dotnet-ids.mvc[22185]: info: Microsoft.AspNetCore.Hosting.Internal.WebHost[2]
dotnet-ids.mvc[22185]: Request finished in 1.6629ms 200 text/css
dotnet-ids.mvc[22185]: info: Microsoft.AspNetCore.Hosting.Internal.WebHost[1]
dotnet-ids.mvc[22185]: Request starting HTTP/1.1 GET http://\www.mvc.meetcorepoint.com/css/site.css
dotnet-ids.mvc[22185]: info: Microsoft.AspNetCore.StaticFiles.StaticFileMiddleware[2]
dotnet-ids.mvc[22185]: Sending file. Request path: '/css/site.css'. Physical path: '/var/www/dotnet/ids.mvc/wwwroot/css/site.cs
dotnet-ids.mvc[22185]: info: Microsoft.AspNetCore.Hosting.Internal.WebHost[2]
dotnet-ids.mvc[22185]: Request finished in 0.3626ms 200 text/css
As it turns out, this had nothing to do with Linux, nor with .NET Core itself. The problem was in my nginx configuration, which is used as a reverse proxy to the .NET Core application. When I was copying the proxy nginx configuration from the web, it somehow prepended a backslash to the Upgrade and Host headers. So instead of having proxy_set_header Host $host; I had used proxy_set_header Host \$host;

NGINX + uWSGI Connection Reset by Peer

I'm trying to host Bottle Application on NGINX using uWSGI.
Here's my nginx.conf
location /myapp/ {
    include uwsgi_params;
    uwsgi_param X-Real-IP $remote_addr;
    uwsgi_param Host $http_host;
    uwsgi_param UWSGI_SCRIPT myapp;
    uwsgi_pass 127.0.0.1:8080;
}
I'm running uwsgi like this:
uwsgi --enable-threads --socket :8080 --plugin python --wsgi-file ./myApp/myapp.py
I'm using a POST request, sent with Dev HTTP Client, which hangs forever when I send the request to
http://localhost/myapp
uWSGI server receives the request and prints
[pid: 4683|app: 0|req: 1/1] 127.0.0.1 () {50 vars in 806 bytes} [Thu Oct 25 12:29:36 2012] POST /myapp => generated 737 bytes in 11 msecs (HTTP/1.1 404) 2 headers in 87 bytes (1 switches on core 0)
but in nginx error log
2012/10/25 12:20:16 [error] 4364#0: *11 readv() failed (104: Connection reset by peer) while reading upstream, client: 127.0.0.1, server: localhost, request: "POST /myApp/myapp/ HTTP/1.1", upstream: "uwsgi://127.0.0.1:8080", host: "localhost"
What to do?
Make sure to consume your POST data in your application.
For example, if you have a Django/Python application:

from django.http import HttpResponse

def my_view(request):
    # ensure the POST data is read, even if you don't need it;
    # without this you get: failed (104: Connection reset by peer)
    data = request.body
    return HttpResponse("Hello World")
Some details: https://uwsgi-docs.readthedocs.io/en/latest/ThingsToKnow.html
You cannot post data from the client without reading it in your application. While this is not a problem for uWSGI, nginx will fail. You can 'fake' it using the --post-buffering option of uWSGI to automatically read data from the socket (if available), but you'd better 'fix' your app (even if I do not consider this a bug).
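For example, reusing the command from the question with --post-buffering added (a sketch; the 8192-byte buffer size is an assumption):

uwsgi --post-buffering 8192 --enable-threads --socket :8080 --plugin python --wsgi-file ./myApp/myapp.py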
This problem occurs when the body of a request is not consumed, since uwsgi cannot know whether it will still be needed at some point. So uwsgi will keep holding on to the data either until it is consumed or until nginx resets the connection (because upstream timed out).
The author of uwsgi explains it here:
08:21 < unbit> plaes: does your DELETE request (not-response) have a body ?
08:40 < unbit> and do you read that body in your app ?
08:41 < unbit> from the nginx logs it looks like it has a body and you are not reading it in the app
08:43 < plaes> so DELETE request shouldn't have the body?
08:43 < unbit> no i mean if a request has a body you have to read/consume it
08:44 < unbit> otherwise the socket will be clobbered
So to fix this you need to make sure to always either read the whole request body, or not send a body if it is not necessary (e.g. for a DELETE). A sketch for the question's framework follows.
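The same idea in Bottle, the framework from the original question (a sketch; the route path and the `application` wiring for uWSGI are assumptions based on the nginx config above):

from bottle import default_app, post, request

@post('/myapp')
def handler():
    # Read (consume) the request body even though we don't use it;
    # otherwise nginx logs 'Connection reset by peer'.
    _ = request.body.read()
    return "Hello World"

# uWSGI loads this module and serves the 'application' callable.
application = default_app()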
Don't use threads!
I had the same problem with the Global Interpreter Lock in Python under uWSGI.
When I don't use threads, there are no connection resets.
Example uWSGI config (1 GB RAM on the server):
[root@mail uwsgi]# cat myproj_config.yaml
uwsgi:
  print: Myproject Configuration Started
  socket: /var/tmp/myproject_uwsgi.sock
  pythonpath: /sites/myproject/myproj
  env: DJANGO_SETTINGS_MODULE=settings
  module: wsgi
  chdir: /sites/myproject/myproj
  daemonize: /sites/myproject/log/uwsgi.log
  max-requests: 4000
  buffer-size: 32768
  harakiri: 30
  harakiri-verbose: true
  reload-mercy: 8
  vacuum: true
  master: 1
  post-buffering: 8192
  processes: 4
  no-orphans: 1
  touch-reload: /sites/myproject/log/uwsgi
