Jetty chunked transfer encoding without keep-alive - asynchronous

I'm streaming data over HTTP using Jetty as a server. The data comes from a message queue, and I'm handling the processing asynchronously, printing messages to the connection when it is ready and there are messages available. When the client disconnects I stop consuming messages from the queue and clean up all resources dedicated to the stream.
Jetty chooses to send the response chunked and sets Transfer-Encoding: chunked by default – this is what I want, since this is an endless stream and I obviously can't set a Content-Length header.
However, I also need to set Connection: close on the response. The server will run behind a load balancer that keeps persistent connections to the backend servers unless they explicitly send Connection: close. I have no way to configure the load balancer; it is completely out of my hands. If the load balancer keeps the connection open, I have no way of knowing when to stop consuming from the message queue.
The problem is that when I call response.setHeader("Connection", "close"), Jetty stops framing the response as chunked. It also does not set a Content-Length header; it just streams the raw output to the connection. As far as I understand, this is not valid HTTP, even though many clients will probably handle it. I would really like to use chunked transfer encoding and also disable keep-alive. How can I convince Jetty to do this?
Here is a minimal example that shows what I do. If I remove the line that sets the Connection header, Jetty chunks the response; with it, it does not.
public class StreamingServer {
    public static void main(String[] args) throws Exception {
        Server server = new Server(2000);
        server.setHandler(new AbstractHandler() {
            public void handle(String target, Request baseRequest, HttpServletRequest request,
                               HttpServletResponse response) throws IOException, ServletException {
                response.setBufferSize(1024);
                // if I remove this line I get Transfer-Encoding: chunked
                response.setHeader("Connection", "close");
                response.flushBuffer();
                AsyncContext asyncContext = request.startAsync();
                asyncContext.setTimeout(0);
                final ServletOutputStream out = response.getOutputStream();
                // start consuming messages from the message queue here
                out.setWriteListener(new WriteListener() {
                    public void onError(Throwable t) {
                        // stop consuming messages and clean up resources
                    }
                    public void onWritePossible() throws IOException {
                        while (out.isReady()) {
                            // send the next available message from the queue
                            out.print(...);
                        }
                    }
                });
            }
        });
        server.start();
        server.join();
    }
}
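For completeness, the disconnect cleanup described in the question could also be hooked through an AsyncListener on the same asyncContext. This is only a sketch against the Servlet async API (3.0+), not part of the original example; it would need javax.servlet.AsyncListener and javax.servlet.AsyncEvent imports.
// Sketch: register cleanup callbacks on the AsyncContext created above.
asyncContext.addListener(new AsyncListener() {
    public void onError(AsyncEvent event) {
        // client disconnected or I/O failed: stop consuming and release resources
    }
    public void onComplete(AsyncEvent event) {
        // stream finished: release resources
    }
    public void onTimeout(AsyncEvent event) {
        // not expected here, since the async timeout is set to 0 (unlimited)
    }
    public void onStartAsync(AsyncEvent event) {
    }
});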

Related

Apache Http EntityUtils.consume() vs EntityUtils.toString()?

I have written an HTTP client where I read the response data from a REST web service. My confusion arises from reading multiple blogs on EntityUtils.consume() and EntityUtils.toString(). I wanted to know the following:
Whether EntityUtils.toString(..) alone is sufficient, since it also closes the stream after reading the content, or whether I should also call EntityUtils.consume(..) as good practice.
Whether both toString() and consume() can be used, and if so, in what order.
If EntityUtils.toString() closes the stream, why does entity.isStreaming(), which EntityUtils.consume(..) checks, still return true?
Could anyone guide me on using these operations in a standard way? I am using HttpClient version 4+.
I have to use these configurations in a multithreaded (web-app) environment.
Thanks
I looked at the recommended example from the Apache HttpComponents HttpClient website.
In the example, they use EntityUtils.toString(..) without needing to call EntityUtils.consume(..) before or after.
They mention that calling httpclient.close() ensures all resources are closed.
source: https://hc.apache.org/httpcomponents-client-ga/httpclient/examples/org/apache/http/examples/client/ClientWithResponseHandler.java
CloseableHttpClient httpclient = HttpClients.createDefault();
try {
    HttpGet httpget = new HttpGet("http://httpbin.org/");
    System.out.println("Executing request " + httpget.getRequestLine());
    // Create a custom response handler
    ResponseHandler<String> responseHandler = new ResponseHandler<String>() {
        @Override
        public String handleResponse(
                final HttpResponse response) throws ClientProtocolException, IOException {
            int status = response.getStatusLine().getStatusCode();
            if (status >= 200 && status < 300) {
                HttpEntity entity = response.getEntity();
                return entity != null ? EntityUtils.toString(entity) : null;
            } else {
                throw new ClientProtocolException("Unexpected response status: " + status);
            }
        }
    };
    String responseBody = httpclient.execute(httpget, responseHandler);
    System.out.println("----------------------------------------");
    System.out.println(responseBody);
} finally {
    httpclient.close();
}
This is what is quoted for the above example:
This example demonstrates how to process HTTP responses using a response handler. This is the recommended way of executing HTTP requests and processing HTTP responses. This approach enables the caller to concentrate on the process of digesting HTTP responses and to delegate the task of system resource deallocation to HttpClient. The use of an HTTP response handler guarantees that the underlying HTTP connection will be released back to the connection manager automatically in all cases.
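For comparison, when a response handler is not used, the entity has to be consumed and the response closed explicitly. Below is a minimal sketch of that manual pattern, assuming HttpClient 4.3+ (the class name is made up and the URL is just the same placeholder endpoint as above):
import org.apache.http.HttpEntity;
import org.apache.http.client.methods.CloseableHttpResponse;
import org.apache.http.client.methods.HttpGet;
import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.HttpClients;
import org.apache.http.util.EntityUtils;

public class ManualConsumeExample {
    public static void main(String[] args) throws Exception {
        try (CloseableHttpClient httpclient = HttpClients.createDefault();
             CloseableHttpResponse response = httpclient.execute(new HttpGet("http://httpbin.org/"))) {
            HttpEntity entity = response.getEntity();
            // EntityUtils.toString() reads the content fully and closes the
            // content stream it opened, so for this simple case it is enough.
            String body = entity != null ? EntityUtils.toString(entity) : null;
            System.out.println(body);
            // Calling consume() afterwards is harmless; it matters in branches
            // where the body is NOT read, so the connection can be reused.
            EntityUtils.consume(entity);
        }
        // response.close() and httpclient.close() happen automatically here,
        // releasing the connection and shutting down the pool.
    }
}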

Java gRPC: exception from client to server

Is it possible to throw an exception from the client to the server?
We have an open stream from the server to the client:
rpc addStream(Request) returns (stream StreamMessage) {}
When I try something like this:
throw Status.INTERNAL.withDescription(e.getMessage()).withCause(e.getCause()).asRuntimeException();
I get the exception in StreamObserver.onError on the client, but there is no exception on the server side.
Servers can respond with a "status" that the stub API exposes as a StatusRuntimeException. Clients, however, can only "cancel" the RPC. Servers will not know the source of the cancellation; it could be because the client cancelled or maybe the TCP connection broke.
In a client-streaming or bidi-streaming call, the client can cancel by calling observer.onError() (without ever calling onCompleted()). However, if the client called onCompleted() or the RPC has a unary request, then you need to use ClientCallStreamObserver or Context:
stub.someRpc(request, new ClientResponseObserver<Request, Response>() {
    private ClientCallStreamObserver<Request> requestStream;

    @Override
    public void beforeStart(ClientCallStreamObserver<Request> requestStream) {
        this.requestStream = requestStream;
    }
    ...
});

// And then where you want to cancel.
// requestStream is non-thread-safe. For unary requests, wait until
// stub.someRpc() returns, since it uses the stream internally.
// The string is not sent to the server. It is just "echoed"
// back to the client's `onError()` to make clear that the
// cancellation was locally caused.
requestStream.cancel("some message for yourself", null);

// For thread-safe cancellation (e.g., for client-streaming)
CancellableContext ctx = Context.current().withCancellation();
StreamObserver<Request> requestObserver = ctx.call(() ->
    stub.someRpc(new StreamObserver<Response>() {
        @Override
        public void onNext(Response response) {
            // handle a response message
        }

        @Override
        public void onCompleted() {
            // The ctx must be closed when done, to avoid leaks
            ctx.cancel(null);
        }

        @Override
        public void onError(Throwable t) {
            ctx.cancel(null);
        }
    }));

// The place you want to cancel
ctx.cancel(ex);
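On the server side there is no exception to catch, but the server can still be notified of the cancellation. Below is a sketch of the server-streaming handler for the addStream RPC above, assuming the generated service base class and the Request/StreamMessage types from the proto; the handler body is only illustrative:
@Override
public void addStream(Request request, StreamObserver<StreamMessage> responseObserver) {
    // The observer handed to a server implementation is a ServerCallStreamObserver.
    ServerCallStreamObserver<StreamMessage> serverObserver =
            (ServerCallStreamObserver<StreamMessage>) responseObserver;
    serverObserver.setOnCancelHandler(() -> {
        // Runs when the client cancels or the transport breaks;
        // stop producing messages and release per-stream resources here.
    });
    // ... emit messages with serverObserver.onNext(...) until done ...
}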

Does the HttpGet/HttpPost abort() method abort the request even if it is taking longer to establish the connection?

I have a scenario where, in certain cases, requests need to be terminated based on an alternate configuration. From https://www.baeldung.com/httpclient-timeout I understood that we can set a hard timeout. However, I am not sure how to test this.
Does the code below abort the request within the given time even in the case of a connection, socket, or read timeout?
HttpGet getMethod = new HttpGet(
        "http://localhost:8080/httpclient-simple/api/bars/1");

int hardTimeout = 5; // seconds
TimerTask task = new TimerTask() {
    @Override
    public void run() {
        if (getMethod != null) {
            getMethod.abort();
        }
    }
};
new Timer(true).schedule(task, hardTimeout * 1000);

HttpResponse response = httpClient.execute(getMethod);
For instance, if the connection timeout is set to 10 seconds and the connection takes more than 10 seconds to establish, does the request still terminate after 5 seconds? Similarly for the other timeout scenarios.
If the Apache HttpClient library does not support this, is there an alternative?
Thanks in advance.
Look here for setting connection and read timeouts with the Apache HttpClient.
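That approach boils down to configuring the timeouts on the client itself. A minimal sketch, assuming HttpClient 4.3+ (the class name is made up and the endpoint is the same placeholder URL as in the question):
import org.apache.http.HttpResponse;
import org.apache.http.client.config.RequestConfig;
import org.apache.http.client.methods.HttpGet;
import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.HttpClients;

public class TimeoutConfigExample {
    public static void main(String[] args) throws Exception {
        // Connect/read timeouts at the client level (HttpClient 4.3+ config API).
        RequestConfig config = RequestConfig.custom()
                .setConnectTimeout(10_000)           // time to establish the TCP connection (ms)
                .setConnectionRequestTimeout(10_000) // time to lease a connection from the pool (ms)
                .setSocketTimeout(10_000)            // max inactivity between data packets (ms)
                .build();

        try (CloseableHttpClient httpClient = HttpClients.custom()
                .setDefaultRequestConfig(config)
                .build()) {
            HttpGet getMethod = new HttpGet("http://localhost:8080/httpclient-simple/api/bars/1");
            HttpResponse response = httpClient.execute(getMethod);
            System.out.println(response.getStatusLine());
        }
    }
}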

How should I implement the HEAD response for a dynamically generated resource?

The code below is written with Spring MVC. I simulate dynamic response generation by reading a file and sending it to the client.
For a GET method, the response will contain the Transfer-Encoding: chunked header rather than the Content-Length header.
For a HEAD method, how should I implement the response? Should I manually insert the Transfer-Encoding: chunked header and remove the Content-Length header?
@RestController
public class ChunkedTransferAPI {

    @Autowired
    ServletContext servletContext;

    @RequestMapping(value = "xxx.iso", method = { RequestMethod.GET })
    public void doChunkedGET(HttpServletResponse response) {
        String filename = "/xxx.iso";
        try {
            ServletOutputStream output = response.getOutputStream();
            InputStream input = servletContext.getResourceAsStream(filename);
            BufferedInputStream bufferedInput = new BufferedInputStream(input);
            int datum = bufferedInput.read();
            while (datum != -1) {
                output.write(datum); // data transfer happens here
                datum = bufferedInput.read();
            }
            output.flush();
            output.close();
        } catch (IOException e) {
            // TODO Auto-generated catch block
            System.out.println(e.getMessage());
        }
    }

    @RequestMapping(value = "xxx.iso", method = { RequestMethod.HEAD })
    public void doChunkedHEAD(HttpServletResponse response) {
        // response.setHeader("Server", "Apache-Coyote/1.1");
        // response.setHeader("Transfer-Encoding", "chunked");
    }
}
My client's behavior is:
Initiate a HEAD request first to get the anticipated response size. This size is used to allocate some buffer.
Then initiate a GET request to actually get the response content and put it in the buffer.
I kind of have the feeling that I am catering to the client's behavior rather than following some RFC standard. I am worried that even if I can make the client happy with my response, it will fail with other servers' responses.
Could anyone shed some light on this? How should I implement the HEAD response?
Or maybe the client should NEVER rely on the HEAD response to decide the size of a GET response because the RFC says:
The server SHOULD send the same header fields in response to a HEAD
request as it would have sent if the request had been a GET, except
that the payload header fields (Section 3.3) MAY be omitted.
And Content-Length happens to be one of the payload header fields.
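Under that reading, one way to fill in the stubbed doChunkedHEAD above is to send only the non-payload headers a GET would also send and write no body. This is a sketch only, not a normative answer, and the Content-Type value is an assumed example:
@RequestMapping(value = "xxx.iso", method = { RequestMethod.HEAD })
public void doChunkedHEAD(HttpServletResponse response) {
    // Per the RFC text quoted above, payload header fields such as
    // Content-Length MAY be omitted, so mirroring only the non-payload
    // headers of the GET response and writing no body is defensible.
    response.setContentType("application/octet-stream"); // assumed type
}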

GWT dealing with request error

I have a GWT module and in it I navigate to a different URL via:
Window.Location.assign(url);
The navigated URL is then handled by a servlet. Up until this point, if there was an error it was handled by the resp.sendError method:
resp.sendError(HttpServletResponse.SC_INTERNAL_SERVER_ERROR, "Failed.");
This would then navigate to the browser's error page. However, I wanted to know: is there a way to avoid navigating to an error page, i.e. so that I could check in my GWT code whether there was an error and then do something, like resend the request, etc.?
Thanks!
Once you navigate away from your web application, that's that. Instead of using Window.Location.assign, you should make an HTTP request from within your web application, for example using RequestBuilder.
Example from the docs mentioned earlier:
import com.google.gwt.http.client.*;
...

String url = "http://www.myserver.com/getData?type=3";
RequestBuilder builder = new RequestBuilder(RequestBuilder.GET, URL.encode(url));

try {
    Request request = builder.sendRequest(null, new RequestCallback() {
        public void onError(Request request, Throwable exception) {
            // Couldn't connect to server (could be timeout, SOP violation, etc.)
        }

        public void onResponseReceived(Request request, Response response) {
            if (200 == response.getStatusCode()) {
                // Process the response in response.getText()
            } else {
                // Handle the error. Can get the status text from response.getStatusText()
            }
        }
    });
} catch (RequestException e) {
    // Couldn't connect to server
}
Note that this will work only if your servlet and web application are on the same address (domain, port, protocol), because of the Same Origin Policy. If that's not the case, there are still some options, like JSON with Padding (which GWT supports via JsonpRequestBuilder).
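For the cross-origin case, here is a minimal sketch of the JsonpRequestBuilder route mentioned above, in the same fragment style as the snippet before it. The URL is the same placeholder as above, and the assumption is that the server actually supports JSONP (i.e. wraps its JSON in the callback parameter GWT appends):
import com.google.gwt.core.client.JavaScriptObject;
import com.google.gwt.jsonp.client.JsonpRequestBuilder;
import com.google.gwt.user.client.rpc.AsyncCallback;
...

JsonpRequestBuilder jsonp = new JsonpRequestBuilder();
jsonp.requestObject("http://www.myserver.com/getData?type=3",
        new AsyncCallback<JavaScriptObject>() {
            public void onFailure(Throwable caught) {
                // Handle the error without leaving the page
            }

            public void onSuccess(JavaScriptObject result) {
                // Process the returned payload
            }
        });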
