Akka Camel netty4 query based communication - one to many - tcp

I want to connect to a TCP server with a proprietary protocol which provides data upon queries.
The TCP data is successfully decoded using a Netty4 ServerClientHandler.
A Camel consumer (client mode) is used for connecting to the server.
I can send the first query through the ServerClientHandler on activation.
Requirement
Query for 2000 messages.
After processing 1000 messages, ask for another 1000 (3000 in total).
Repeat.
The problem is that after processing 1000 messages there is no way to request another 1000.
I have tried the Netty4 sync option, but what I need is multiple responses for one request.
I was thinking of using the netty4 producer. Is there any way to send one request and receive multiple responses using the Camel netty4 producer? If that is possible, I can route the responses through Camel actors.
I have been going through the camel-netty source code with no success. If you can provide any starting point it would be really helpful.
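For illustration, the kind of approach I am experimenting with keeps the batching logic inside the Netty pipeline itself, next to the existing ServerClientHandler. This is only a sketch: the String message type, the QUERY strings and the assumption of a StringEncoder in the pipeline are stand-ins for the proprietary protocol.

```java
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.SimpleChannelInboundHandler;

// Counts decoded messages and re-issues the query every BATCH_SIZE messages.
// Sits in the pipeline after the decoder and before the Camel-facing handler.
public class BatchingClientHandler extends SimpleChannelInboundHandler<String> {

    private static final int BATCH_SIZE = 1000;
    private int received;

    @Override
    public void channelActive(ChannelHandlerContext ctx) {
        // first query on connect, as the existing ServerClientHandler already does
        ctx.writeAndFlush("QUERY 2000\n");
    }

    @Override
    protected void channelRead0(ChannelHandlerContext ctx, String msg) {
        ctx.fireChannelRead(msg); // pass the decoded message on towards Camel

        if (++received % BATCH_SIZE == 0) {
            // after every 1000 processed messages, ask the server for 1000 more
            ctx.writeAndFlush("QUERY 1000\n");
        }
    }
}
```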

Related

When does AkkaHttp backpressure kick in?

… when the HTTP response entity is not consumed, when the client's TCP buffer becomes full, or when the rate at which the client drains its TCP buffer is lower than the rate at which the server pushes data to it?
I am looking for a way to achieve the following:
Let's assume that there is a backpressure-able source of data on the server, such as an Apache Kafka topic.
If I consume this source from a remote location, the rate at which that remote location can consume may be lower; this is solved if a Kafka client/consumer is used.
However let's assume that the client is a browser and that exposing direct Kafka protocol / connectivity is not a possibility.
Further, let's assume that it is possible to get all the value even while jumping over some messages.
For instance in case of compacted topics, getting only the latest values for each key is enough for a client, no need to go through intermediate values.
This would be equivalent to Flowable.onBackpressureLatest() or AkkaStreams.aggregateOnBackpressure or onBackpressureAggregate.
Would there be a way to expose the topic over HTTP REST (e.g. Server-Sent Events / chunked transfer encoding) or over WebSockets that would achieve this effect of skipping over intermediate values for each key?
Please advise, thanks
Akka HTTP supports back pressure based on the TCP protocol very well, and you can read about using it in combination with streaming here.
Kafka consumption and exposure via HTTP with back pressure can easily be achieved by combining akka-http, akka-stream and alpakka-kafka.
Kafka consumers need to poll, and alpakka applies back pressure by reducing polling requests.
I don't see the necessity of skipping over messages when back pressure is fully supported. Kafka will keep track of the offset consumed by a consumer group (the one you pick for your service or HTTP connection), and this guarantees eventual consumption of all messages. Of course, if messages are produced into the topic far faster than they are consumed, the consumer will never catch up. Let me know if this is your case.
As a final note, you may check out the Confluent REST Proxy API, which allows you to read Kafka messages in a RESTful manner.
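A rough sketch of that wiring in the Java DSL (assuming akka-http 10.2+ and alpakka-kafka; the topic name, bootstrap servers, group id and port are placeholders):

```java
import static akka.http.javadsl.server.Directives.completeOK;
import static akka.http.javadsl.server.Directives.path;

import akka.actor.ActorSystem;
import akka.http.javadsl.Http;
import akka.http.javadsl.marshalling.sse.EventStreamMarshalling;
import akka.http.javadsl.model.sse.ServerSentEvent;
import akka.http.javadsl.server.Route;
import akka.kafka.ConsumerSettings;
import akka.kafka.Subscriptions;
import akka.kafka.javadsl.Consumer;
import akka.stream.javadsl.Source;
import org.apache.kafka.common.serialization.StringDeserializer;

public class KafkaSseBridge {

    public static void main(String[] args) {
        ActorSystem system = ActorSystem.create("kafka-sse");

        ConsumerSettings<String, String> settings =
                ConsumerSettings.create(system, new StringDeserializer(), new StringDeserializer())
                        .withBootstrapServers("localhost:9092")
                        .withGroupId("sse-bridge");

        // Demand flows from the HTTP response stream back to the Kafka source,
        // so alpakka only polls Kafka when the client can accept more events.
        Source<ServerSentEvent, ?> events =
                Consumer.plainSource(settings, Subscriptions.topics("my-topic"))
                        .map(record -> ServerSentEvent.create(record.value()));

        Route route = path("events", () ->
                completeOK(events, EventStreamMarshalling.toEventStream()));

        Http.get(system).newServerAt("localhost", 8080).bind(route);
    }
}
```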

Angular 6 - how to make a single HTTP request and listen for multiple responses?

My backend generates a log while processing some data, and I would like to show it as a console in my frontend.
How can I implement a method in Angular 6 that listens for multiple responses from the backend over a single HTTP request, until a certain parameter is true?
You can make use of WebSockets, i.e. open a WebSocket connection to your backend and receive data over it. This is a push mechanism: the server pushes data onto the connection and the client receives it as new data becomes available.
It is not possible with a single HTTP request, because HTTP follows a pull mechanism: you only get the data available at the time of the request, and to get new data you have to perform another request.
Unfortunately, an HTTP request cannot remain open listening for multiple responses; once it receives a response, the connection is closed.
Fortunately, you can use WebSockets.
Implementing WebSockets is not too difficult, and there are many tutorials for implementing them with Angular, such as this one: https://tutorialedge.net/typescript/angular/angular-websockets-tutorial/
I'm not sure what back end technology you're using, but most modern ones have websocket support.
If you're not familiar with websockets in general, check out this article: https://medium.com/@dominik.t/what-are-web-sockets-what-about-rest-apis-b9c15fd72aac
“WebSockets” is an advanced technology that allows real-time interactive communication between the client browser and a server. It uses a completely different protocol that allows bidirectional data flow, making it unique against HTTP.
The article also compares/contrasts it to HTTP, so it may give you a better understanding of HTTP as well.
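If your back end happens to be Java, for instance, a minimal endpoint using the standard JSR 356 WebSocket API could push the log lines like this. The /logs path, the loop standing in for a real log source, and the final "done" message are all assumptions about your setup; deploy it in any JSR 356 container (Tomcat, Jetty, etc.):

```java
import java.io.IOException;
import javax.websocket.OnOpen;
import javax.websocket.Session;
import javax.websocket.server.ServerEndpoint;

// Pushes log lines to the browser as they are produced; the Angular side just
// opens a WebSocket to ws://host/logs and listens.
@ServerEndpoint("/logs")
public class LogSocket {

    @OnOpen
    public void onOpen(Session session) throws IOException, InterruptedException {
        // stand-in for a real log source; blocking here is fine only for a sketch
        for (int i = 1; i <= 5; i++) {
            session.getBasicRemote().sendText("processing step " + i);
            Thread.sleep(1000);
        }
        // the "certain parameter is true" signal from the question
        session.getBasicRemote().sendText("done");
        session.close();
    }
}
```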

How to cluster multiple HTTP Clients?

I have a use case which involves pulling/streaming data from numerous HTTP endpoints, in excess of 100.
I have a standalone Java app that can manage up to 30+ client requests using the Ning async HTTP client library, but I was looking for ideas on how to scale this up to handle many more.
The use case is to pull the data from the endpoints and push it into a Kafka queue (similar to a JMS queue) for processing by a Storm topology. The bit I'm stuck on is how best to efficiently get the HTTP endpoint data into the Kafka queues in the first place.
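For reference, my current single-JVM approach boils down to something like the sketch below (the endpoint URLs, topic name and Kafka config are placeholders; the client API is the pre-2.x Ning one):

```java
import java.util.Arrays;
import java.util.List;
import java.util.Properties;
import com.ning.http.client.AsyncCompletionHandler;
import com.ning.http.client.AsyncHttpClient;
import com.ning.http.client.Response;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class EndpointsToKafka {

    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        KafkaProducer<String, String> producer = new KafkaProducer<>(props);

        // One client instance multiplexes many concurrent requests over NIO, so
        // scaling to 100+ endpoints is mostly a matter of partitioning this list
        // across JVMs rather than adding threads.
        AsyncHttpClient client = new AsyncHttpClient();
        List<String> endpoints = Arrays.asList(
                "http://endpoint-1/data", "http://endpoint-2/data");

        for (String url : endpoints) {
            client.prepareGet(url).execute(new AsyncCompletionHandler<Response>() {
                @Override
                public Response onCompleted(Response response) throws Exception {
                    // key by source URL so the Storm topology can partition on it
                    producer.send(new ProducerRecord<>("endpoint-data", url,
                            response.getResponseBody()));
                    return response;
                }
            });
        }
    }
}
```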
thanks

How does an HTTP client associate an HTTP response with a request (with Netty) or in general?

Is an HTTP endpoint supposed to respond to requests from a particular client in the order they are received?
What if that doesn't make sense, as in the case of requests handled by a cluster behind a proxy, or requests handled with NIO where one request finishes faster than another?
Is there a standard way of associating a unique id with each HTTP request, to be matched with the response? How is this handled in clients like HttpComponents HttpClient or curl?
The question comes down to the following case:
Suppose I am downloading a file from a server and the request has not finished. Is the client capable of completing other requests on the same keep-alive connection?
Whenever a TCP connection is opened, the connection is identified by the source and destination ports and IP addresses. So if I connect to www.google.com on destination port 80 (the default for HTTP), I need a free source port, which the OS will generate.
The reply of the web server is then sent to that source port (and IP). This is also how NAT works: it remembers which source port belongs to which internal IP address (and vice versa for incoming connections).
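A quick way to see this from Java: two sockets to the same server and destination port get distinct, OS-assigned source ports, so the two replies can never be confused (the host is arbitrary).

```java
import java.net.Socket;

public class SourcePorts {
    public static void main(String[] args) throws Exception {
        // same destination (host, 80) twice; the OS picks a fresh source port each time
        try (Socket a = new Socket("www.google.com", 80);
             Socket b = new Socket("www.google.com", 80)) {
            System.out.println("connection A source port: " + a.getLocalPort());
            System.out.println("connection B source port: " + b.getLocalPort());
        }
    }
}
```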
As for your edit: no, a single HTTP connection can execute only one command (GET/POST/etc.) at a time. If you send another command while you are retrieving data from a previously issued command, the results may vary per client and server implementation. I guess that Apache, for example, will transmit the result of the second request after the data of the first request has been sent.
I won't re-write CodeCaster's answer because it is very well worded.
In response to your edit - no, it is not. A single persistent HTTP connection can only be used for one request at a time, or things would get very confusing. Because HTTP does not define any form of request/response tracking mechanism, it simply would not be possible.
It should be noted that there are other protocols which use a similar message format (conforming to RFC 822) that do allow for this (using mechanisms such as SIP's CSeq header), and it would be possible to implement this in a custom HTTP app, but HTTP defines no standard mechanism for it, so nothing can be done that could be assumed to work everywhere. It would also present a problem with the response for the second message: do you wait for the first response to finish before sending the second, or try to pause the first response while you send the second? How would you communicate this in a way that guarantees messages won't become corrupted?
Note also that SIP (usually) operates over UDP, which does not guarantee packet ordering, making the CSeq system more of a necessity.
If you want to send a request to a server while another transaction is still in progress, you will need to open a new connection to the server, and hence a new TCP stream.
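For example, with the JDK 11 HttpClient pinned to HTTP/1.1 (the URLs are placeholders), two overlapping requests transparently use two TCP connections:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.concurrent.CompletableFuture;

public class ConcurrentRequests {
    public static void main(String[] args) {
        HttpClient client = HttpClient.newBuilder()
                .version(HttpClient.Version.HTTP_1_1)
                .build();

        HttpRequest big = HttpRequest.newBuilder(URI.create("http://example.com/big-file")).build();
        HttpRequest small = HttpRequest.newBuilder(URI.create("http://example.com/status")).build();

        // sendAsync does not wait for the first transfer to finish; since one
        // HTTP/1.1 connection carries one exchange at a time, the pool opens a
        // second connection for the overlapping request.
        CompletableFuture<HttpResponse<String>> f1 =
                client.sendAsync(big, HttpResponse.BodyHandlers.ofString());
        CompletableFuture<HttpResponse<String>> f2 =
                client.sendAsync(small, HttpResponse.BodyHandlers.ofString());

        CompletableFuture.allOf(f1, f2).join();
        System.out.println("both responses received");
    }
}
```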
Facebook did some research into this while building their CDN, and concluded that you can efficiently have two or three open HTTP streams at any one time; any more than that hurts overall transfer time because of the extra packet overhead. I would link to the blog entry if I could find the link...

Detecting missing responses to long running HTTP (SOAP) requests

I need a way to detect a missing response to a long running HTTP POST request. This problem arises when the network infrastructure (firewalls, proxies, unplugged cables, etc.) drops the response packets. The server may detect this failure, but the client cannot send additional bytes after the POST to probe the state of the TCP connection. The failure may be limited to a single TCP connection. For example I may be able to subsequently open a new TCP connection to the server.
I'm looking for a solution that still uses HTTP POST and does not change the duration of the server side processing.
Some solutions that I can think of are:
Provide a side-channel interface to retrieve request & response history. If the history lists the response as having been sent (presumably resulting in a TCP error) but I have not received it within a reasonable time, I can generate a local error.
Use an X header to request that the server deliver "spurious" 100 Continue provisional responses at a regular interval. If I fail to see an expected 100 Continue or a non-provisional response, I can generate a local error (a sketch of this follows the list).
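For illustration, the client side of the second option might look like the raw-socket sketch below; the X-Heartbeat request header is hypothetical, and the host, path and 30-second window are assumptions.

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.Socket;
import java.net.SocketTimeoutException;

public class HeartbeatPost {
    public static void main(String[] args) throws Exception {
        try (Socket socket = new Socket("server.example.com", 80)) {
            socket.setSoTimeout(30_000); // a missed heartbeat window => local error

            PrintWriter out = new PrintWriter(socket.getOutputStream());
            out.print("POST /long-job HTTP/1.1\r\n"
                    + "Host: server.example.com\r\n"
                    + "X-Heartbeat: 100-continue\r\n" // hypothetical header from the question
                    + "Content-Length: 0\r\n\r\n");
            out.flush();

            BufferedReader in = new BufferedReader(new InputStreamReader(socket.getInputStream()));
            String line;
            try {
                while ((line = in.readLine()) != null) {
                    // provisional 100 Continue lines (and their blank separators)
                    // are heartbeats: keep waiting for the real response
                    if (line.isEmpty() || line.startsWith("HTTP/1.1 100")) continue;
                    System.out.println("final status: " + line);
                    break;
                }
            } catch (SocketTimeoutException e) {
                System.err.println("no heartbeat or response within 30s: assuming lost response");
            }
        }
    }
}
```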
Is there a state of the art solution for this problem?
It sounds to me like you are using SOAP for something that would be much better done with a stateful connection or a server-side push technology.
