Apache Kafka: Java Producer reusability - asynchronous

Does anybody know if
kafka.javaapi.producer.Producer
can be reused across several method invocations (e.g. several send(...) calls), or whether it should be closed each time?

Yes, it can surely be reused. Producer creation is a pretty slow operation because it requires establishing connections to all partitions (and probably ZooKeeper as well). So producers should be reused whenever possible.
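For example, a minimal sketch using the modern org.apache.kafka.clients.producer API (which replaced kafka.javaapi.producer and behaves the same way in this respect); the broker address and topic name are placeholders:

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class EventSender {
    private final KafkaProducer<String, String> producer;

    public EventSender() {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder broker
        props.put("key.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
        // created once: connection setup and metadata fetches happen here
        this.producer = new KafkaProducer<>(props);
    }

    // called many times, reusing the same producer instance
    public void send(String key, String value) {
        producer.send(new ProducerRecord<>("events", key, value)); // placeholder topic
    }

    // close only on application shutdown
    public void shutdown() {
        producer.close();
    }
}
```

The modern KafkaProducer is also documented as thread-safe, so a single instance can be shared across threads.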

Related

How to evenly balance processing many simultaneous tasks?

PROBLEM
Our PROCESSING SERVICE is serving UI, API, and internal clients and listening for commands from Kafka.
A few API clients might create a lot of generation tasks (one task is N messages) in a short time. With Kafka, we can't control how the commands are distributed, because each command goes to a partition that is consumed by exactly one processing instance (aka worker). Thus, UI requests could be left waiting too long while API requests are being processed.
In an ideal implementation, we should handle all tasks evenly, regardless of their size. The capacity of the processing service is distributed among all active tasks. And even if the cluster is heavily loaded, we can be confident that a newly arrived task will start being processed almost immediately, at least before the processing of all the other tasks ends.
SOLUTION
Instead, we want an architecture that looks more like the following diagram, where we have separate queues per combination of customer and endpoint. This architecture gives us much better isolation, as well as the ability to dynamically adjust throughput on a per-customer basis.
On the producer side:
the task comes in from the client
a queue is created for this task immediately
all of the task's messages are sent to this queue
On the consumer side:
one process constantly updates the list of queues
other processes follow this list and consume, for example, one message from each queue (see the sketch below)
consumers are scaled as needed
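A minimal sketch of that consumer side in Java, assuming the per-task queues can be modeled as in-memory BlockingQueues (in a real deployment they would be broker queues, e.g. RabbitMQ queues, and finished tasks' queues would also be removed from the list):

```java
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.CopyOnWriteArrayList;

// Sweep the current list of per-task queues and take at most one
// message from each per pass, so a huge task cannot starve a small one.
public class FairTaskConsumer implements Runnable {
    private final List<BlockingQueue<String>> taskQueues = new CopyOnWriteArrayList<>();

    // called by the process that keeps the queue list up to date
    public void addTaskQueue(BlockingQueue<String> queue) {
        taskQueues.add(queue);
    }

    @Override
    public void run() {
        while (!Thread.currentThread().isInterrupted()) {
            boolean gotAny = false;
            for (BlockingQueue<String> queue : taskQueues) {
                String message = queue.poll(); // at most one message per queue
                if (message != null) {
                    process(message);
                    gotAny = true;
                }
            }
            if (!gotAny) {
                try {
                    Thread.sleep(50); // back off when all queues are idle
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            }
        }
    }

    private void process(String message) {
        // actual generation work for one message goes here
    }
}
```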
QUESTION
Is there any common solution to such a problem, using RabbitMQ or any other tooling? Historically, we use Kafka on the project, so an approach that uses it would be amazing, but we can use any technology for the solution.
Why not use Spark to execute the messages within the task? What I'm thinking is that each worker creates a Spark context that then parallelizes the messages. The function that is mapped can be based on which Kafka topic the user is consuming. I suspect, however, that your queues might have tasks that contain a mixture of messages: UI, API calls, etc. This will result in a more complex mapping function. If you're not using a standalone cluster and are using YARN or something similar, you can change the queueing method that the Spark master is using.
As I understand the problem, you want to isolate requests per customer using dynamically allocated queues, which will allow each customer's tasks to be executed independently. The problem looks similar to the head-of-line blocking issue in networking.
Dynamically allocating queues is difficult. It can also lead to an explosion in the number of queues, which can be a burden on the infrastructure. In addition, some queues could be empty or carry very little load. RabbitMQ won't help here; it is a queueing system with a different protocol than Kafka.
One alternative is to use a custom partitioner in Kafka that can look at the partition load and balance the tasks based on it. This works if the tasks are independent in nature and no state store is maintained in the worker.
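For illustration, a custom Kafka Partitioner has the shape below. The "load" signal here is just a per-producer count of records sent to each partition, because a real load metric (e.g. consumer lag) would have to be fed in from outside this interface:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicLong;
import org.apache.kafka.clients.producer.Partitioner;
import org.apache.kafka.common.Cluster;

// Hypothetical load-aware partitioner: picks the partition this
// producer has sent the fewest records to so far.
public class LeastLoadedPartitioner implements Partitioner {
    private final Map<Integer, AtomicLong> sentCounts = new ConcurrentHashMap<>();

    @Override
    public int partition(String topic, Object key, byte[] keyBytes,
                         Object value, byte[] valueBytes, Cluster cluster) {
        int numPartitions = cluster.partitionsForTopic(topic).size();
        int leastLoaded = 0;
        long min = Long.MAX_VALUE;
        for (int p = 0; p < numPartitions; p++) {
            long count = sentCounts.computeIfAbsent(p, k -> new AtomicLong()).get();
            if (count < min) {
                min = count;
                leastLoaded = p;
            }
        }
        sentCounts.get(leastLoaded).incrementAndGet();
        return leastLoaded;
    }

    @Override public void configure(Map<String, ?> configs) {}
    @Override public void close() {}
}
```

The producer would then be pointed at this class via the standard partitioner.class producer property.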
The other alternative would be to load balance at the customer level. In this case you select a dedicated set of predefined queues for a set of customers. Customers with certain IDs will be served by a given set of queues. The downside of this is that some queues can have less load than others. This solution is similar to Virtual Output Queuing in networking.
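A sketch of that static, customer-level mapping (the queue count and naming scheme are made up for the example):

```java
public class CustomerQueueRouter {
    private static final int NUM_QUEUES = 16; // size of the predefined queue set

    // Map every customer onto one of the predefined queues by hashing the
    // id, so a given customer's tasks always land on the same queue.
    static String queueFor(String customerId) {
        int index = Math.floorMod(customerId.hashCode(), NUM_QUEUES);
        return "tasks-" + index;
    }

    public static void main(String[] args) {
        System.out.println(queueFor("customer-42")); // e.g. "tasks-7"
    }
}
```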
My understanding is that the partitioning of the messages does not ensure an even load balance. I think you should avoid overengineering with custom machinery on top of the Kafka partitioner, and instead think about a good partitioning key that will allow you to use Kafka in an efficient manner.

An asynchronous connection pool implementation in Rust

I have a Tokio TCP back-end application which, briefly, after receiving a request, reads something from Redis, writes something to PostgreSQL, uploads something via HTTP, sends something to RabbitMQ, etc. Processing each request takes a lot of time, so a separate task is created for each request. As sharing connections is impossible in asynchronous models, some connection pooling is required. For now, new connections are established on each request, and that is extremely wasteful.
I have been looking for an asynchronous connection pool implementation in Rust, but have not found any that are up to date.
I would like to hear some advice on how to implement it myself.
The only idea I have come up with is:
Implement a Stream/Sink object with an inner collection of connections. It does not matter whether it is LIFO or FIFO, since the connections are identical. On application startup, N connections are allocated.
Now I am not sure if it is possible to share such a pool among tasks, but if it were possible, tasks would poll the stream for a connection instance (instead of establishing their own), use it, and then put it back.
If there were no connections available, the stream might establish more of them or ask the task to hang on (depending on its configuration).
If a connection fails, it gets dropped and the pool now contains N-1 connections, so it may decide to allocate a new one on the next request.
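To make the idea concrete, here is a minimal blocking sketch of the same shape in Java (Connection is a hypothetical stand-in for a Redis/Postgres/HTTP handle); an async Rust version would replace the blocking take with polling a channel or acquiring an async semaphore permit:

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Hypothetical stand-in for a real client handle.
class Connection {
    static Connection open() { return new Connection(); } // establish here
    boolean isHealthy() { return true; }                  // liveness check
}

public class ConnectionPool {
    private final BlockingQueue<Connection> idle = new LinkedBlockingQueue<>();

    public ConnectionPool(int size) {
        for (int i = 0; i < size; i++) {
            idle.add(Connection.open()); // N connections at startup
        }
    }

    // Borrow a connection; blocks until one is free. An async runtime
    // would suspend the task here instead of blocking a thread.
    public Connection acquire() throws InterruptedException {
        return idle.take();
    }

    // Return a connection, replacing it if it has failed, so the pool
    // stays at N connections.
    public void release(Connection conn) {
        idle.add(conn.isHealthy() ? conn : Connection.open());
    }
}
```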
So I have two problems for which I cannot find proper answers anywhere:
Must/can/should I share the stream/sink-pool among tasks in some way? Anyway, I see some Shared futures in the futures crate.
There are some murky points in the tokio/futures tutorial. E.g. it does not explain how to notify the uppermost task, that is, how to implement the mythical innermost future, which does not poll anything itself but still has to notify the upper futures.
Or is my approach completely wrong? I could start playing with it by myself, but I have a strong suspicion that I have missed something, e.g. a one-click solution.

Why use more than one endpoint in a Rebus system?

In a Rebus service bus, there is a single message transport queue per endpoint. It is possible for an endpoint to handle more than one message type, and it is possible to have only a single endpoint in a system.
Other than the throughput of messages, what reasons are there to use more than a single endpoint in a Rebus service bus system?
Excellent question! :) There can be many reasons why you might want to have several Rebus endpoints active at the same time.
An obvious reason is that you might want to host the endpoints in separate processes so you can update them independently of each other. But since this reason is pretty obvious, I assume you are thinking about reasons one might want to host multiple Rebus endpoints in the same process.
Let me just mention a few(*):
Concurrency requirements
One endpoint might be hosting data that experiences contention and therefore does not benefit from being able to process messages concurrently - this endpoint will probably have only a few threads and low parallelism, possibly 1/1.
Another endpoint might be doing stream-based data processing (e.g. loading blobs from one place into another, downloading data from web services, etc.), which can be done with very high throughput and low resource requirements with one single thread and a high level of parallelism - e.g. 1/20.
Yet another endpoint might be doing a lot of serialization/deserialization, which is usually CPU-bound, and therefore might benefit from running on a many-core box with many worker threads and matching parallelism - e.g. 10/10.
As you can see, the type of tasks performed by an endpoint can call for a configuration that matches the nature of the tasks.
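As a rough analogue in Java (Rebus itself configures this with its workers/parallelism settings in .NET; the pools below only illustrate matching thread counts to workload):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class EndpointPools {
    // endpoint with heavy contention: effectively serial processing ("1/1")
    final ExecutorService contendedEndpoint = Executors.newFixedThreadPool(1);

    // CPU-bound (de)serialization endpoint: one thread per core ("10/10")
    final ExecutorService cpuBoundEndpoint = Executors.newFixedThreadPool(10);

    // the "1/20" streaming case has no direct blocking-pool equivalent:
    // it relies on async I/O, where one thread keeps many transfers in flight
}
```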
SLAs
One endpoint might be designated for processing low-priority background stuff, e.g. moving data to cold storage, optimizing storage of historic data, etc.
Another endpoint might be processing messages where low latency is the most important quality attribute.
If these two were using the same queue, the low-priority background stuff could sometimes clog up the queue, hindering low-latency processing of the other messages.
Logical separation
I have many times started out by hosting several Rebus endpoints in the same process because it was easy to deal with during development, while keeping the endpoints separate because they were implementing different business functions.
This way it is easy to physically break them apart some time later on, allowing for a higher degree of separation and independence.
(*) Udi Dahan works with the concepts "business components" and "autonomous components" where the first one is an implementation of a business capability and the second one is what business components are decomposed into, mostly for technical reasons.
I guess you could say that the first two reasons I mentioned are separate endpoints for "autonomous component" reasons, whereas the third is separation because things belong to different business components.
Udi keeps a pretty strict view of these concepts that is completely orthogonal to how the system is physically composed, but I almost always end up with pretty high convergence between logical separation and physical separation.

When a queue should be used?

Suppose we were to implement a network application, such as a chat with a central server and several clients: we assume that all communication must go through the central server, which then picks up messages from some clients and forwards them to the target clients, and so on.
Regardless of the technology used (sockets, web services, etc..), it is possible to think that there are some producer threads (that generate messages) and some consumer threads (that read messages).
For example, you could use a single queue for incoming and outgoing messages, but using a single queue, you couldn't receive and send messages simultaneously, because only one thread at a time can access the queue.
Perhaps it would be more appropriate to use two queues: for example, this article explains a way in which you can manage a double queue so that producers and consumers can work almost simultaneously. This scenario may be fine if there are only one producer and one consumer, but if there are many clients:
How can the central server receive data simultaneously from multiple input streams?
How can the central server send data simultaneously to multiple output streams?
To resolve this problem, my idea is to use a double queue for each client: on the central server, each client connection may be associated with two queues, one for incoming messages from that client and one for outgoing messages addressed to that client. In this way the central server may send and receive data simultaneously on almost all the connections with the clients...
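A minimal sketch of that per-client layout (class and method names are illustrative):

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Each client connection gets one queue for messages arriving from the
// client and one for messages to be written back to it.
public class ClientConnection {
    final BlockingQueue<String> incoming = new LinkedBlockingQueue<>();
    final BlockingQueue<String> outgoing = new LinkedBlockingQueue<>();

    // reader thread: socket -> incoming queue
    void onMessageFromSocket(String message) throws InterruptedException {
        incoming.put(message);
    }

    // writer thread: outgoing queue -> socket
    String nextMessageToWrite() throws InterruptedException {
        return outgoing.take();
    }
}
```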
There are probably other ways to manage the queues... What are the parameters for determining how many queues are needed, and how should they be organized? Are there cases that do not need any queue at all?
To me, this idea of using a queue per client or multiple queues per client seems to miss the point. First of all, it is absolutely possible to build a queue which can be accessed simultaneously by 2 threads (one can be enqueueing an item while a different one is dequeueing another item). If you want to know how, post a specific question about that.
Second, even if we assume that only 1 thread at a time can access a single queue, and even if we assume that the server will be receiving or sending data to/from all the clients simultaneously, it still doesn't follow that you need a different queue for each client. To avoid limiting system performance, you just need to allow enough concurrency to utilize all the server's CPUs. Even with a single, system-wide queue, if dequeueing/enqueueing messages is fast enough compared to the other work the server is doing, it might not be a bottleneck. (And with an efficient implementation, simply inserting an item or removing an item from a queue should be very fast. It's a very simple operation.) For that message queue to become the bottleneck limiting performance, either you would need a LOT of CPUs, or everything else the server was doing would have to be very fast. In that case, you could work out some scheme with 2 or 4 system-wide queues, to allow 2x or 4x more concurrency.
The whole idea of using work queues in a multi-threaded system is that they 1) allow multiple consumers to all grab work from a single location, so producers can "dump" whatever work they need done at that single location without worrying about which consumer will do it, and 2) function as a load-balancing mechanism for the consumers. (Additionally, a work queue can act as a "buffer" if producers temporarily generate work too fast for the consumers.) If you have a dedicated pair of producer-consumer threads for each client, it calls into question why you need to use queues at all. Why not just do a synchronous "pass off" from dedicated producer to corresponding dedicated consumer? Or, why not use a single thread per client which acts as both producer and consumer? Using queues in the way which you are proposing doesn't seem to really gain anything.
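For reference, the single system-wide queue described above is only a few lines in Java; java.util.concurrent.LinkedBlockingQueue already supports simultaneous access from many producer and consumer threads:

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// One system-wide queue: many producers enqueue while many consumers
// dequeue; the queue also acts as buffer and load balancer.
public class SharedWorkQueue {
    public static void main(String[] args) {
        BlockingQueue<String> queue = new LinkedBlockingQueue<>();

        for (int c = 0; c < 4; c++) {           // 4 consumer threads
            new Thread(() -> {
                while (true) {                  // sketch only: runs until killed
                    try {
                        String msg = queue.take(); // blocks until work arrives
                        System.out.println(Thread.currentThread().getName()
                                + " handled " + msg);
                    } catch (InterruptedException e) {
                        return;
                    }
                }
            }).start();
        }

        for (int p = 0; p < 2; p++) {           // 2 producer threads
            final int id = p;
            new Thread(() -> {
                for (int i = 0; i < 10; i++) {
                    queue.add("message-" + id + "-" + i);
                }
            }).start();
        }
    }
}
```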

What is a full-fledged process?

While I was reading the O'Reilly book Java Servlet Programming, there was a statement that I couldn't understand; the text is as below:
Servlets may also be allowed to persist between requests as object
instances, taking up far less memory than full-fledged processes.
How can I tell whether a servlet takes far less memory than a full-fledged process?
It's hard to tell what this fragment is about without more context, but I guess it is a comparison between servlets and CGI. Basically, in a single JVM/servlet container you can deploy several singleton servlets. This means one servlet instance (occupying very little memory) is capable of handling an unlimited number of requests (hardware limitations put aside).
With CGI, you had to create a separate process per request, which might cause more latency and the aforementioned high memory usage.
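A minimal servlet makes the contrast visible: the container creates one instance and reuses it for every request, so each request costs only a pooled thread rather than a new process (the class and field names here are illustrative):

```java
import java.io.IOException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// One instance of this servlet persists between requests as an object.
public class ChatServlet extends HttpServlet {
    private int requestCount; // instance state survives across requests

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws IOException {
        synchronized (this) {  // shared instance state needs synchronization
            requestCount++;
        }
        resp.getWriter().println("Handled " + requestCount
                + " requests with a single servlet instance");
    }
}
```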
