I have ASP.NET Web API running on IIS 8.5, and my POST method accepts a JSON document in the request body. A client app is using Apache HttpClient, which apparently adds the Transfer-Encoding: chunked header to the request automatically. My API method throws an exception because the body appears to be missing - it can't deserialize the JSON, even though the body looks fine in the client logs.
How should I handle the request to ensure I get the whole body? I assume IIS should support Transfer-Encoding on requests as well, since it is part of the HTTP/1.1 spec, right?
There's a similar question unanswered: Reading Body on chunked transfer encoded http requests in ASP.NET
Basically, you have to check the ContentLength header and set it to null if it is 0:
public class ChunkJsonMediaTypeFormatter : JsonMediaTypeFormatter
{
    public override Task<object> ReadFromStreamAsync(Type type, Stream readStream, HttpContent content, IFormatterLogger formatterLogger)
    {
        // A chunked request can arrive with Content-Length: 0; clearing it lets the base formatter read the stream to its end.
        content.Headers.ContentLength = (content.Headers.ContentLength == 0) ? null : content.Headers.ContentLength;
        return base.ReadFromStreamAsync(type, readStream, content, formatterLogger);
    }
}
Wire up this formatter in your Web API configuration:
GlobalConfiguration.Configure(config =>
{
    var jsonFormatter = new ChunkJsonMediaTypeFormatter() { SerializerSettings = config.Formatters.JsonFormatter.SerializerSettings };
    config.Formatters.Remove(config.Formatters.JsonFormatter);
    config.Formatters.Insert(0, jsonFormatter);
});
https://gist.github.com/jayoungers/0b39b66c49bf974ba73d83943c4b218b
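With the formatter registered, a chunked POST should bind like any other request. Here is a minimal usage sketch; the DTO, route, and controller names are purely illustrative and not from the original question:
public class OrderDto // hypothetical model
{
    public string Name { get; set; }
    public int Quantity { get; set; }
}

public class OrdersController : ApiController
{
    [HttpPost]
    [Route("api/orders")]
    public IHttpActionResult Post([FromBody] OrderDto order)
    {
        // With the ContentLength fix in place, a chunked JSON body deserializes normally,
        // so order is null only when the body is genuinely missing or invalid.
        if (order == null)
            return BadRequest("Request body could not be deserialized.");
        return Ok(order);
    }
}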
I have written an HTTP client where I read the response data from a REST web service. My confusion arises after reading multiple blogs on EntityUtils.consume() and EntityUtils.toString(). I would like to know the following:
Whether EntityUtils.toString(..) alone is sufficient, since it also closes the stream after reading the content, or whether I should also call EntityUtils.consume(..) as good practice.
Whether both toString() and consume() can be used, and if so, in what order.
If EntityUtils.toString() closes the stream, why does entity.isStreaming() (which EntityUtils.consume(..) checks internally) still return true?
Could anyone guide me on the standard way to use these operations? I am using HttpClient version 4+.
I have to use this in a multithreaded (web app) environment.
Thanks
I looked at the recommended example from the Apache HttpComponents website.
In the example, they use EntityUtils.toString(..) without calling EntityUtils.consume(..) before or after.
They mention that calling httpclient.close() ensures all resources are released.
source: https://hc.apache.org/httpcomponents-client-ga/httpclient/examples/org/apache/http/examples/client/ClientWithResponseHandler.java
CloseableHttpClient httpclient = HttpClients.createDefault();
try {
    HttpGet httpget = new HttpGet("http://httpbin.org/");
    System.out.println("Executing request " + httpget.getRequestLine());
    // Create a custom response handler
    ResponseHandler<String> responseHandler = new ResponseHandler<String>() {
        @Override
        public String handleResponse(final HttpResponse response) throws ClientProtocolException, IOException {
            int status = response.getStatusLine().getStatusCode();
            if (status >= 200 && status < 300) {
                HttpEntity entity = response.getEntity();
                return entity != null ? EntityUtils.toString(entity) : null;
            } else {
                throw new ClientProtocolException("Unexpected response status: " + status);
            }
        }
    };
    String responseBody = httpclient.execute(httpget, responseHandler);
    System.out.println("----------------------------------------");
    System.out.println(responseBody);
} finally {
    httpclient.close();
}
This is the explanation quoted for the example above:
This example demonstrates how to process HTTP responses using a response handler. This is the recommended way of executing HTTP requests and processing HTTP responses. This approach enables the caller to concentrate on the process of digesting HTTP responses and to delegate the task of system resource deallocation to HttpClient. The use of an HTTP response handler guarantees that the underlying HTTP connection will be released back to the connection manager automatically in all cases.
I'm trying to send a request with a large JavaScript object like
var objectArr = [
    {
        NAME: "foo",
        GROUP: "bar",
        PHONE_NUM: "1234567890"
    },
    ...
];
objectArr.length > 250
Actually I don't think it is large, but when I send this object to the server, the server responds with 500 Internal Server Error.
When I send an object with length < 250, there is no error.
Below are the controller action method and the request and response (selected information only).
Model
public class Info
{
    public string NAME { get; set; }
    public string GROUP { get; set; }
    public string PHONE_NUM { get; set; }
}
Action Method
public JsonResult ReceiveJSON(List<Info> objectArr)
{
    // handling objectArr --------------- (1)
    return null;
}
Request Header
Accept-Encoding:gzip,deflate
Cache-Control:no-cache
Content-Length:77898
Content-Type:application/x-www-form-urlencoded;charset=UTF-8
Response Header
Cache-Control:no-cache
Expires:-1
Pragma:no-cache
Server:Kestrel
Transfer-Encoding:chunked
X-Powered-By:ASP.NET
The real problem is that when I set a breakpoint at (1) in the ReceiveJSON action method and POST the JSON object from the browser, the debugger does not stop at (1) and the server responds with 500 Internal Server Error. I think it is not an error in the response itself but some issue in the middleware handling the request.
Is there any way to resolve it? Thanks
Yes, maxJsonLength works for the ASP.NET Framework; it is not necessary in .NET Core.
An HTTP response has no size limit, and since JSON arrives as an HTTP response body, it has no size limit either.
There might be a problem if the object parsed from the JSON response consumes too much memory; it will crash the browser. So it's better to test with different data sizes and check whether your app still works correctly.
Lazy loading may also solve this.
I am working with ASP.NET Web API. I understand the response depends on the Content-Type header. I am asking this question assuming the client will always send Content-Type: application/json. As suggested in this SO post, I am using the following line to return JSON as the default response.
config.Formatters.JsonFormatter.SupportedMediaTypes.Add(new MediaTypeHeaderValue("text/html"));
Until now all my controllers returned dynamic responses and the data came back as JSON.
[Route("employee/division")]
[HttpPost]
public dynamic GetEmployeeData(FilterModel filter)
{
//getData returns a POCO
var data = getData();
//This will return valid json response to client
return data;
}
Recently, I added another endpoint which is returning HttpResponseMessage.
[HttpGet]
[Route("employee/update")]
public HttpResponseMessage Update()
{
    ...........
    // return the response
    var msg = new HttpResponseMessage(HttpStatusCode.OK)
    {
        Content = new StringContent("Info updated !!")
    };
    return msg;
}
The problem is that in this case the response is returned as a plain string; the response header contains "content-type": "text/plain; charset=utf-8".
I have the following questions:
Why am I getting a different type of response with each approach?
Is approach no. 1, which is working, the right way to return a JSON response?
Since I want to set a different response code, I decided to use approach no. 2. Is there any downside to this approach? How can I return JSON using it? (See the sketch below.)
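For the last point, here is a minimal sketch of two common options, assuming the standard Web API JSON formatter is registered; the anonymous payload and the use of Json.NET are illustrative only:
[HttpGet]
[Route("employee/update")]
public HttpResponseMessage Update()
{
    // Option 1: let Web API content-negotiate and serialize the object (JSON by default here).
    return Request.CreateResponse(HttpStatusCode.OK, new { message = "Info updated !!" });

    // Option 2: keep StringContent, but serialize the payload yourself (e.g. with Json.NET)
    // and declare the media type explicitly, so the response is not sent as text/plain:
    // var json = JsonConvert.SerializeObject(new { message = "Info updated !!" });
    // return new HttpResponseMessage(HttpStatusCode.OK)
    // {
    //     Content = new StringContent(json, Encoding.UTF8, "application/json")
    // };
}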
The code below is written with Spring MVC. I simulate dynamic response generation by reading a file and sending it to the client.
For a GET method, the response will contain the Transfer-Encoding: chunked header rather than the Content-Length header.
For a HEAD method, how should I implement the response? Should I manually insert the Transfer-Encoding: chunked header and remove the Content-Length header?
@RestController
public class ChunkedTransferAPI {

    @Autowired
    ServletContext servletContext;

    @RequestMapping(value = "xxx.iso", method = { RequestMethod.GET })
    public void doChunkedGET(HttpServletResponse response) {
        String filename = "/xxx.iso";
        try {
            ServletOutputStream output = response.getOutputStream();
            InputStream input = servletContext.getResourceAsStream(filename);
            BufferedInputStream bufferedInput = new BufferedInputStream(input);
            int datum = bufferedInput.read();
            while (datum != -1) {
                output.write(datum); // data transfer happens here
                datum = bufferedInput.read();
            }
            output.flush();
            output.close();
        } catch (IOException e) {
            System.out.println(e.getMessage());
        }
    }

    @RequestMapping(value = "xxx.iso", method = { RequestMethod.HEAD })
    public void doChunkedHEAD(HttpServletResponse response) {
        // response.setHeader("Server", "Apache-Coyote/1.1");
        // response.setHeader("Transfer-Encoding", "chunked");
    }
}
My client's behavior is:
Initiate a HEAD request first to get the anticipated response size. This size is used to allocate some buffer.
Then initiate a GET request to actually get the response content and put it in the buffer.
I kind of have the feeling that I am catering to the client's behavior rather than following some RFC standard. I am worried that even if I can make the client happy with my response, it will fail with other servers' responses.
Could anyone shed some light on this? How should I implement the HEAD response?
Or maybe the client should NEVER rely on the HEAD response to decide the size of a GET response because the RFC says:
The server SHOULD send the same header fields in response to a HEAD
request as it would have sent if the request had been a GET, except
that the payload header fields (Section 3.3) MAY be omitted.
And Content-Length happens to be one of the payload header fields.
I am not able to set keep-alive in the response header with Web API. I am running in IIS Express (the development server). I am using a message handler which sets it:
protected async override Task<HttpResponseMessage> SendAsync(HttpRequestMessage request, System.Threading.CancellationToken cancellationToken)
{
    if (request.Headers.Authorization == null)
    {
        var nonAuthenticatedResponse = new HttpResponseMessage(HttpStatusCode.Unauthorized);
        nonAuthenticatedResponse.Headers.WwwAuthenticate.Add(new AuthenticationHeaderValue("NTLM"));
        return nonAuthenticatedResponse;
    }

    var response = await base.SendAsync(request, cancellationToken);
    response.Headers.ConnectionClose = false;
    return response;
}
Am I missing any extra parameter here?
To add more detail: I am trying to implement NTLM authentication for the application. Once the request is made from the browser we issue a 401 Unauthorized and the negotiation happens, but it is able to send the Authorization header back in the subsequent request.
We need to explicitly tell the browser to keep the connection alive.
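Note that ConnectionClose = false does not emit a Keep-Alive header by itself; it only ensures Connection: close is not sent, and under HTTP/1.1 persistent connections are the default anyway. If an explicit Connection header turns out to be needed for the NTLM handshake, one possible sketch is below. This is an untested assumption, not a confirmed fix: the IIS/IIS Express host may rewrite or manage connection headers on its own.
protected async override Task<HttpResponseMessage> SendAsync(HttpRequestMessage request, System.Threading.CancellationToken cancellationToken)
{
    if (request.Headers.Authorization == null)
    {
        var challenge = new HttpResponseMessage(HttpStatusCode.Unauthorized);
        challenge.Headers.WwwAuthenticate.Add(new AuthenticationHeaderValue("NTLM"));
        // NTLM negotiates over a single connection, so keep the 401 challenge connection open as well.
        challenge.Headers.ConnectionClose = false;
        challenge.Headers.Connection.Add("Keep-Alive"); // explicit token; the host may still override it
        return challenge;
    }

    var response = await base.SendAsync(request, cancellationToken);
    response.Headers.ConnectionClose = false;
    response.Headers.Connection.Add("Keep-Alive");
    return response;
}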