I have a .NET Core API which we are running on the Kestrel server.
In the response headers it adds the "Content-Length" header, which is causing an issue.
Is there any way to remove the Content-Length header for all responses?
I tried the code below, but it only removes the "Server: Kestrel" style header.
.ConfigureWebHostDefaults(webBuilder =>
{
    webBuilder.UseStartup<Startup>()
        .UseKestrel(options => options.AddServerHeader = false);
});
If I remember correctly, AddServerHeader only controls the Server header, i.e. the "Kestrel" value.
I don't think there is a built-in mechanism to remove "Content-Length", so you should create a middleware and remove this header before the response is sent.
I would also point out that after removing this header the client will keep reading the response (or wait), so you would need to force-close the connection each time, which isn't recommended.
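A minimal sketch of such a middleware (untested; assumes ASP.NET Core's Startup.Configure, and note that without a declared length Kestrel will usually fall back to Transfer-Encoding: chunked):

// Register early in Startup.Configure. Headers can only be changed
// before the response starts, hence the OnStarting callback.
app.Use(async (context, next) =>
{
    context.Response.OnStarting(() =>
    {
        // Clearing ContentLength removes the Content-Length header.
        context.Response.ContentLength = null;
        return Task.CompletedTask;
    });
    await next();
});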
You can read more here: https://stackoverflow.com/a/15995101/303254
I have two .NET Core applications: one is used as an API server and the other as an Angular web client.
Just to get things working for the sake of getting work done, I had put this in the ConfigureServices method of my API server's Startup. I believed this would just allow any request:
services.AddCors(options =>
{
    options.AddPolicy(ALL_ORIGINS,
        builder =>
        {
            builder.WithOrigins("*");
        });
});
It seemed to be working, but all of my requests were GETs. The first time I tried to POST, I got the error:
Request header field content-type is not allowed by
Access-Control-Allow-Headers in preflight response
Figuring I had a crappy temporary solution in place on the API server, I put the same "*" CORS policy in place in the Angular web client's server, and it didn't help.
I'm not sure what I need to do here.
Try this:
services.AddCors(options =>
{
    options.AddPolicy("AllowAllOrigins",
        builder =>
        {
            builder.AllowAnyOrigin();
            builder.AllowAnyHeader();
            builder.AllowAnyMethod();
        });
});
That should work, at least in .NET Core 2.1.
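Note that registering the policy alone isn't enough: it also has to be applied in Startup.Configure (the policy name here just mirrors the one above):

// Must come before the MVC/endpoint middleware for the CORS headers to be applied.
app.UseCors("AllowAllOrigins");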
I want to use the HERE maps autocomplete in my project.
But when I send a request like the one in the documentation,
this.axios.get(
  'http://autocomplete.geocoder.api.here.com/6.2/suggest.json' +
  '?app_id={YOUR_APP_ID}' +
  '&app_code={YOUR_APP_CODE}' +
  '&query=Pariser+1+Berl' +
  '&beginHighlight=<b>' +
  '&endHighlight=</b>'
)
  .then(response => {
    console.log(response)
  })
  .catch(error => {
    console.log(error)
  })
I get an error:
OPTIONS http://autocomplete.geocoder.api.here.com/6.2/suggest.json?{...} 405
Access to XMLHttpRequest at 'http://autocomplete.geocoder.api.here.com/6.2/suggest.json?{...}' from origin 'http://localhost:8080' has been blocked by CORS policy: Response to preflight request doesn't pass access control check: It does not have HTTP ok status.
In the Chrome developer console's Network panel I see this:
Provisional headers are shown
Access-Control-Request-Headers: x-auth-token, content-type
Access-Control-Request-Method: GET
I set content-type in the request headers to application/json and the provisional headers changed to:
Access-Control-Request-Headers: x-auth-token
Access-Control-Request-Method: GET
So if I understand correctly, I should set the x-auth-token header. But where can I get this token?
Or maybe this problem has another cause?
There's nothing about such problems in the documentation.
The problem was simple and a bit stupid.
When a user authenticated in my app, I added a default header to axios:
axios.defaults.headers.common['X-Auth-Token'] = token
so this header was sent with every request.
But the HERE maps API doesn't want this header in requests, and this was the cause of the problem.
The solution was to remove this header from requests to the HERE maps API.
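One way to do that (a sketch, assuming plain axios is importable; hereClient is a made-up name) is a dedicated axios instance for HERE requests, so the login-time default header is not attached:

import axios from 'axios'

// A separate instance for HERE calls; explicitly drop the token in case
// it was already copied over from the global defaults.
const hereClient = axios.create()
delete hereClient.defaults.headers.common['X-Auth-Token']

hereClient.get('http://autocomplete.geocoder.api.here.com/6.2/suggest.json?app_id={YOUR_APP_ID}&app_code={YOUR_APP_CODE}&query=Pariser+1+Berl')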
For those who have defined the header 'Content-Type': 'application/json' by default:
you must deactivate it; there should not be any custom HTTP headers on requests to the HERE API services.
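For example, if that header was defined globally on axios, it can be dropped the same way (a sketch):

delete axios.defaults.headers.common['Content-Type']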
As a temporary workaround, install the Allow-Control-Allow-Origin Chrome extension. Once it's installed, click its icon at the top right, flip the switch, refresh, and call your API again to get the response.
I'm using ASP.NET Core RC2 and Kestrel as my web server. I need to ensure that requests (in this case, all of them) are responded to with a no-cache header so that browsers get the newest version (not a 304).
Is there a way in Startup to configure Kestrel or a way to inject this step into the pipeline?
EDIT: no-store may be a better choice in my situation: https://developers.google.com/web/fundamentals/performance/optimizing-content-efficiency/http-caching "no-store Response is not allowed to be cached and must be fetched in full on every request."
You can use middleware to work with headers. For example, you can force no-cache cache-control by adding the following to the top of your Startup's Configure method:
// Requires: using Microsoft.Net.Http.Headers; (for HeaderNames)
app.Use(async (httpContext, next) =>
{
    httpContext.Response.Headers[HeaderNames.CacheControl] = "no-cache";
    await next();
});
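If no-store is the better fit, as the edit to the question suggests, the same middleware can send both directives:

httpContext.Response.Headers[HeaderNames.CacheControl] = "no-store, no-cache";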
I have just run across an issue where setting response.headers['Content-Length'] in a Rails 3.2.2 application causes Nginx to throw a "502 Bad Gateway" error.
I have an action in a controller that uses the send_data method to send raw JPEG data contained in a variable. Earlier, I had a problem with some browsers not downloading the entire image and discovered that no Content-Length header was being sent, so I decided to use the .bytesize of the variable containing the JPEG data as the Content-Length.
This works fine in development (using Unicorn), and there is now a Content-Length header where there wasn't one before, but in production, where I'm using Nginx and Passenger, I get the aforementioned 502 Bad Gateway. Also, in the Nginx error log, I see:
[error] 30574#0: *1686 upstream prematurely closed connection while reading response header from upstream
There are no matching entries in the Rails production log, which tells me the application is fine.
I commented out the line where I set the Content-Length header, and the problem went away. I'm still testing whether I actually need to send a Content-Length header at all, but meanwhile, I thought I'd post this out of curiosity to see if anyone has any thoughts.
AHA! I had to convert the size to a string with the .to_s method. So, my final result is:
response.headers['Content-Length'] = photo_data.bytesize.to_s
send_data photo_data, :type => :jpg, :filename => 'file_name.jpg', :disposition => 'attachment'
So, it would seem that it is okay to send the Content-Length header on Nginx/Passenger, but Passenger chokes if it's not explicitly a string.
Our web application (ASP.NET Web Forms) has a page that will display a recently generated PDF file to users. Because the PDF file is sometimes quite large, we've implemented a "streaming" approach to send it down to the client browser in chunks.
Despite sending the data down in chunks, we know the full size of the file prior to sending it, so we set the Content-Length header appropriately. This had been working in our production environment for a while (and continues to work in our test environment with a virtually identical configuration) until today. The issue reported was that Chrome would attempt to open the PDF file but would hang with the "Loading" animation stuck.
Because everything was still working fine in our test environment I was able to use Firebug to take a look at the response headers that were coming back in both environments. In the test environment, I was seeing a proper 'Content-Length' header, while in production that had been replaced with a Transfer-Encoding: chunked header. Chrome doesn't like this, hence the hang-up.
I've read some articles and posts talking about how the Transfer-Encoding header can show up when no Content-Length header is provided, but we are specifying the Content-Length header and everything still appears to work while running the same code for the same PDF file on a test server.
Both test and production servers are running IIS 7.5 and both have Dynamic and Static Compression enabled.
Here is the code in question:
var fileInfo = new FileInfo(fileToSendDown);

Response.ClearHeaders();
Response.ContentType = "application/pdf";
Response.AddHeader("Content-Disposition", "filename=test.pdf");
Response.AddHeader("Content-Length", fileInfo.Length.ToString());

var buffer = new byte[1024];
using (var fs = File.Open(fileToSendDown, FileMode.Open, FileAccess.Read, FileShare.Read))
{
    int read;
    while ((read = fs.Read(buffer, 0, 1024)) > 0)
    {
        if (!Response.IsClientConnected) break;
        Response.OutputStream.Write(buffer, 0, read);
        Response.Flush();
    }
}
I was fortunate enough to see the same behavior on my local workstation, so using the debugger I was able to see that the 'Transfer-Encoding: chunked' header is being set on the second pass through the while loop, during the call to 'Flush'. At that point the response has both a Content-Length header and a Transfer-Encoding header, but somehow by the time the response reaches the browser, Firebug only shows the Transfer-Encoding header.
UPDATE
I think I've tracked this down to the combination of sending the data down in "chunks" AND attaching a 'Filter' to the HttpResponse object (we were using a filter to track the size of the viewstate being sent down with each page). There's no sense in using an HTTP filter when sending a PDF down to the browser, so clearing the filter here (see the sketch below) resolved our issue. I decided to dig in a little deeper purely out of curiosity and have updated this question should anyone else ever stumble onto this problem in the future.
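A sketch of the idea, assuming the filter is attached somewhere common such as a base page or HTTP module (ViewStateSizeFilter is a made-up stand-in for our tracking filter):

// Only attach the viewstate-tracking filter to ordinary page responses;
// skipping it for the PDF download lets the Content-Length header survive.
if (Response.ContentType != "application/pdf")
{
    Response.Filter = new ViewStateSizeFilter(Response.Filter);
}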
I've got a simple app up on AppHarbor that reproduces the issue: http://transferencodingtest.apphb.com/. If you check both the 'Use Filter?' and 'Send In Chunks?' boxes you should be able to see the 'transfer-encoding: chunked' header show up (using Chrome dev tools, Firebug, Fiddler, whatever). If either of the boxes are not checked, you'll get a proper content-length header. The underlying code is up on github so you can see what's going on behind the scenes:
https://github.com/appakz/TransferEncodingTest
Note that to repro locally you'd need to set up a local website in IIS 7.5 (7 may also work; I haven't tried). The ASP.NET development server that ships with Visual Studio DOES NOT repro the issue.
I've added some more details to a blog post here: 'Content-Length' Header Replaced With 'Transfer-Encoding: Chunked' in ASP .NET
From an article on MSDN it seems that you can disable chunked encoding:
appcmd set config /section:asp /enableChunkedEncoding:False
But it's mentioned under ASP settings, so it may not apply to a response generated from an ASP.NET handler.
Once Response.Flush() has been called, the response body is in the process of being sent to the client, so no additional headers can be added to the response. I find it very unlikely that a second call to Response.Flush() is adding the Transfer-Encoding header at that time.
You say you have compression enabled. That almost always requires a chunked response. So it would make sense that if the server knows the Content-Length prior to compression, it might swap that header for the Transfer-Encoding header and chunk the response. However, even with compression enabled on the server, the client has to explicitly state support for compression in its Accept-Encoding request header, or else the server cannot compress the response. Did you check for that in your tests?
On a final note, since you are calling Response.Flush() manually, try setting Response.Buffer = True and Response.BufferOutput = False. Apparently they have conflicting effects on how Response.Flush() operates. See the comment at the bottom of this page and this page.
I had a similar problem when writing a large CSV (the file didn't exist; I wrote it line by line by iterating through an in-memory collection and generating each line) by calling Response.Write on the response stream with BufferOutput set to false, but the solution was to change
Response.ContentType = "text/csv" to Response.ContentType = "application/octet-stream"
When the content type wasn't set to application/octet-stream, a bunch of other response headers were added, such as Content-Encoding: gzip.
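A minimal sketch of that fix (csvLines stands in for the in-memory collection, and the filename is made up):

Response.BufferOutput = false;
// application/octet-stream keeps IIS from gzip-compressing the response,
// which is what was pushing it toward Transfer-Encoding: chunked.
Response.ContentType = "application/octet-stream";
Response.AddHeader("Content-Disposition", "attachment; filename=export.csv");

foreach (var line in csvLines)
{
    Response.Write(line);
    Response.Write("\r\n");
}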