Throttling outgoing HTTP requests with Async Jersey HTTP Client using RxJava

I am trying to throttle outgoing HTTP requests using the Jersey Client. Since I am running in a Vert.x Verticle, I created a special RateLimiter class to handle throttling.
My goal is to prevent HTTP calls from being made at a rate greater than 1 per second. The idea is that a submitted callable will run on a single-threaded ExecutorService, so that I can block that single thread in order to guarantee that these tasks are not handled at a greater rate.
Basically the only public method in this class is "call":
public <T> Observable<T> call(Callable<Observable<T>> action) {
    return Observable.create(subscriber -> {
        Observable<Observable<T>> observed =
                Observable.from(executor.submit(() -> {
                    return action.call();
                })).doOnError(throwable -> {
                    logger.error(throwable);
                });
        observed.subscribe(t -> {
            try {
                Thread.sleep(1000);
                t.subscribe(data -> {
                    try {
                        subscriber.onNext(data);
                    } catch (Throwable e) {
                        subscriber.onError(e);
                    }
                    subscriber.onCompleted();
                });
            } catch (Exception e) {
                logger.error(e);
            }
        });
    });
}
This is my current implementation, which sleeps for 1 second no matter how much time has passed since the previous call. Initially I tried using a ScheduledExecutorService and calculating the delay time so that requests would be submitted at exactly 1 per second. However, in both cases it often fails to meet the rate restriction and I get two requests submitted immediately one after the other.
My assumption is that somewhere the requests are being handed to a different execution queue, which is being polled continuously by a different thread, so that if for some reason that thread was busy and two requests exist in the queue at the same time, they will be executed sequentially but with no delay between them.
Any ideas how to resolve this? Maybe a different approach?

I would go with a simple Vert.x event bus and a queue, from which you poll every second:
public static void main(String[] args) {
    Vertx vertx = Vertx.vertx();
    vertx.deployVerticle(new DebounceVerticle(), (r) -> {
        // OK, verticle is ready!
        // Request to send 10 events in 1 second
        for (int i = 0; i < 10; i++) {
            vertx.eventBus().publish("call", UUID.randomUUID().toString());
        }
    });
}
private static class DebounceVerticle extends AbstractVerticle {

    HttpClient client;

    @Override
    public void start() {
        client = vertx.createHttpClient();
        BlockingQueue<String> queue = new LinkedBlockingQueue<>();
        vertx.eventBus().consumer("call", (payload) -> {
            String message = (String) payload.body();
            queue.add(message);
            System.out.println(String.format("I got %s but I don't know when it will be executed", message));
        });
        vertx.setPeriodic(1000, (l) -> {
            String message = queue.poll();
            if (message != null) {
                System.out.println(String.format("I'm finally sending %s", message));
                // Do your client magic
            }
        });
    }
}

Just prepend the web service call with a Guava RateLimiter. Here's an example in RxJava which shows how events emitted every 500 ms are throttled to once per second.
Function<Long, Long> throttlingFunction = new Function<Long, Long>() {
    private RateLimiter limiter = RateLimiter.create(1.0);

    public Long apply(Long t) throws Exception {
        limiter.acquire();
        return t;
    }
};

Observable.interval(500, TimeUnit.MILLISECONDS)
    .map(throttlingFunction)
    .subscribe(new Consumer<Long>() {
        public void accept(Long t) throws Exception {
            System.out.println(t);
        }
    });
Also, in Vert.x all the blocking stuff is supposed to be run with the help of executeBlocking.
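For completeness, a minimal sketch of executeBlocking, assuming Vert.x 3.8+ (where the first handler receives a Promise); blockingHttpCall is a hypothetical stand-in for the blocking Jersey invocation:
vertx.<String>executeBlocking(promise -> {
    // Runs on a worker thread, so blocking here (sleep, RateLimiter.acquire(), a synchronous HTTP call) is fine.
    String result = blockingHttpCall(); // hypothetical blocking call
    promise.complete(result);
}, res -> {
    // Back on the event loop with the outcome.
    if (res.succeeded()) {
        System.out.println("Got: " + res.result());
    } else {
        res.cause().printStackTrace();
    }
});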

Related

.NET 6 Async Semaphore Error Under Mild Load

I'm working on a basic (non-DB) connection pool which allows only 1 connection to be created per project.
The connection pool supports an async-task/threaded environment, and therefore I have made use of a semaphore instead of a regular lock.
I wrote a test, below, which is meant to stress-test the connection pool.
The code works, but under higher load the semaphore throws an error.
I can overcome this error by decreasing the load, for example by increasing _waitTimeMs to a higher number (e.g. 50 ms, 100 ms or 1000 ms) or by decreasing _numberOfTasks (e.g. to 5 or 3).
I should also mention that it sometimes manages to run higher-load tests without errors.
Is there a mistake or misconception in my code and/or use of semaphores?
using System;
using System.Collections.Concurrent;
using System.Collections.Generic;
using System.Threading;
using System.Threading.Tasks;

internal class Program
{
    static int _numberOfTasks = 50;
    static int _waitTimeMs = 1;
    static SemaphoreSlim _dictLock = new SemaphoreSlim(1, 1);
    static ConcurrentDictionary<string, bool> _pool = new ConcurrentDictionary<string, bool>();

    /// <summary>
    /// Only 1 connection allowed per project.
    /// We reuse connections if available in pool, otherwise we create 1 new connection.
    /// </summary>
    static async Task<string> GetConnection(string projId)
    {
        try
        {
            // Enter sema lock to prevent more than 1 connection
            // from being added for the same project.
            if (await _dictLock.WaitAsync(_waitTimeMs))
            {
                // Try retrieve connection from pool
                if (_pool.TryGetValue(projId, out bool value))
                {
                    if (value == false)
                        return "Exists but not connected yet.";
                    else
                        return "Success, exists and connected.";
                }
                // Else add connection to pool
                else
                {
                    _pool.TryAdd(projId, false);
                    // Simulate delay in establishing new connection
                    await Task.Delay(2);
                    _pool.TryUpdate(projId, true, false);
                    return "Created new connection successfully & added to pool.";
                }
            }
            // Report failure to acquire lock in time.
            else
                return "Server busy. Please try again later.";
        }
        catch (Exception ex)
        {
            return "Error " + ex.Message;
        }
        finally
        {
            // Ensure our lock is released.
            _dictLock.Release();
        }
    }

    static async Task Main(string[] args)
    {
        if (true)
        {
            // Create a collection of the same tasks
            List<Task> tasks = new List<Task>();
            for (int i = 0; i < _numberOfTasks; i++)
            {
                // Each task will try to get an existing or create new connection to Project1
                var t = new Task(async () => { Console.WriteLine(await GetConnection("Project1")); });
                tasks.Add(t);
            }
            // Execute these tasks in parallel.
            Parallel.ForEach<Task>(tasks, (t) => { t.Start(); });
            Task.WaitAll(tasks.ToArray());
            Console.WriteLine("Done");
            Console.Read();
        }
    }
}
Is there a mistake or misconception in my code and/or use of semaphores?
There's a bug in your code, yes. If the WaitAsync returns false (indicating that the semaphore was not taken), then the semaphore is still released in the finally block.
If you must use a timeout with WaitAsync (which is highly unusual and questionable), then your code should only call Release if the semaphore was actually taken.
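A minimal sketch of the corrected shape, keeping the fields and return strings from the question; the point is just that Release is only reachable after a successful WaitAsync:
static async Task<string> GetConnection(string projId)
{
    // Bail out before entering the try/finally, so a failed wait can never reach Release.
    if (!await _dictLock.WaitAsync(_waitTimeMs))
        return "Server busy. Please try again later.";
    try
    {
        if (_pool.TryGetValue(projId, out bool connected))
            return connected ? "Success, exists and connected." : "Exists but not connected yet.";
        _pool.TryAdd(projId, false);
        await Task.Delay(2); // simulate delay in establishing the new connection
        _pool.TryUpdate(projId, true, false);
        return "Created new connection successfully & added to pool.";
    }
    finally
    {
        // Safe: this only runs when the semaphore was actually taken.
        _dictLock.Release();
    }
}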

Consume a message from a topic in Confluent.Kafka when the consumer runs

I'm using Confluent.Kafka (1.4.4) in a .NET Core project as a message broker client. In the startup of the project I set only "bootstrapservers" to the specific servers from the appSettings.json file, and I produce messages in an API when necessary with the code below in the related class:
public async Task WriteMessage<T>(string topicName, T message)
{
    using (var p = new ProducerBuilder<Null, string>(_producerConfig).Build())
    {
        try
        {
            var serializedMessage = JsonConvert.SerializeObject(message);
            var dr = await p.ProduceAsync(topicName, new Message<Null, string> { Value = serializedMessage });
            logger.LogInformation($"Delivered '{dr.Value}' to '{dr.TopicPartitionOffset}'");
        }
        catch (ProduceException<Null, string> e)
        {
            logger.LogInformation($"Delivery failed: {e.Error.Reason}");
        }
    }
}
I have also added the following code in the consumer solution:
public async Task Run()
{
    using (var consumer = new ConsumerBuilder<Ignore, string>(_consumerConfig).Build())
    {
        consumer.Subscribe(new List<string>() { "ActiveMemberCardForPanClubEvent", "CreatePanClubEvent", "RemovePanClubEvent" });
        CancellationTokenSource cts = new CancellationTokenSource();
        Console.CancelKeyPress += (_, e) =>
        {
            e.Cancel = true; // prevent the process from terminating.
            cts.Cancel();
        };
        try
        {
            while (true)
            {
                try
                {
                    var result = consumer.Consume(cts.Token);
                    if (result.Message != null)
                    {
                        using (LogContext.PushProperty("RequestId", Guid.NewGuid()))
                        {
                            // Do something
                            logger.LogInformation($"Consumed message '{result.Message.Value}' at: '{result.TopicPartitionOffset}'.");
                            await DoJob(result.Topic, result.Message.Value);
                        }
                    }
                    else
                    {
                        logger.LogInformation($"Message is null for topic '{result.Topic}' and partition: '{result.TopicPartitionOffset}'.");
                    }
                }
                catch (ConsumeException e)
                {
                    logger.LogInformation($"Error occurred: {e.Error.Reason}");
                }
            }
        }
        catch (OperationCanceledException)
        {
            // Ensure the consumer leaves the group cleanly and final offsets are committed.
            consumer.Close();
        }
    }
}
I produce a message, and when the consumer project is running everything goes perfectly and the message is read in the consumer solution.
The problem arises when the consumer project is not running and I queue a message via the producer in the API. When the consumer is started afterwards, it does not receive the message that was produced for that topic.
I am familiar with and have experience with message brokers, and I know that a sent message stays on the bus until it is consumed, but I don't understand why it doesn't work with Kafka in this project.
The default setting for the "auto.offset.reset" Consumer property is "latest".
That means (in the context of no offsets being written yet) if you write a message to some topic and then subsequently start the consumer, it will skip past any messages written before the consumer was started. This could be why your consumer is not seeing the messages queued by your producer.
The solution is to set "auto.offset.reset" to "earliest" which means that the consumer will start from the earliest offset on the topic.
https://docs.confluent.io/current/installation/configuration/consumer-configs.html#auto.offset.reset
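For illustration, a hedged sketch of how that might look with the strongly-typed config in Confluent.Kafka 1.x; the server list and group id are placeholders:
var config = new ConsumerConfig
{
    BootstrapServers = "localhost:9092", // placeholder
    GroupId = "my-consumer-group",       // placeholder
    // Start from the earliest offset when the group has no committed offsets yet.
    AutoOffsetReset = AutoOffsetReset.Earliest
};

using (var consumer = new ConsumerBuilder<Ignore, string>(config).Build())
{
    consumer.Subscribe("CreatePanClubEvent");
    // ... consume loop as in the question ...
}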

How to make a Blazor page update the content of one HTML tag with incoming data from a gRPC service

So I'm testing with Blazor and gRPC, and my difficulty at the moment is how to pass the content of a variable that lives in a class, specifically the gRPC GreeterService class, to the Blazor page when new information arrives. Note that my application is both a client and a server: I make an initial communication to the server, and then the server starts sending data (numbers) to the client in unary mode, every time it has new data to send. I have all this working, but now I'm left with that final implementation.
This is my Blazor page
@page "/greeter"
@inject GrpcService1.GreeterService GreeterService1
@using BlazorApp1.Data

<h1>Grpc Connection</h1>

<input type="text" @bind="myID" />
<button @onclick="SayHello">SayHello</button>

<p>@Greetmsg</p>
<p></p>

@code {
    string myID;
    string Greetmsg;

    async Task SayHello()
    {
        this.Greetmsg = await this.GreeterService1.SayHello(this.myID);
    }
}
The method that later receives the communication from the server, if the hello is accepted, looks something like this:
public override async Task<RequestResponse> GiveNumbers(BalconyFullUpdate request, ServerCallContext context)
{
    RequestResponse resp = new RequestResponse { RequestAccepted = false };
    if (request.Token == publicAuthToken)
    {
        number = request.Number;
        resp.RequestAccepted = true;
    }
    return await Task.FromResult(resp);
}
Every time a new number arrives I want to show it in the UI.
Another way I could do this: within a while loop, the client could call the server requesting a new number, just like the SayHello request, and simply await a server response that will only come when the server has a new number to send. When it comes, the UI is updated. I'm just reluctant to do it this way because I'm afraid that for some reason the client request is forgotten and the client just sits there waiting for a response that will never come. I know that I could implement a timeout on the client side to handle that, and on the server maybe I could pause the response, with a thread pause or something like that, and unpause it when the method that generates numbers has a new one (no clue how to do that). This last solution looks much more difficult to me than the first one.
What are your thoughts about it? Any solutions?
##################### UPDATE ##########################
Now I'm trying to use a singleton, grab its instance in the Blazor page, and subscribe to one of its events.
This is the singleton:
public class ThreadSafeSingletonString
{
    private static ThreadSafeSingletonString _instance;
    private static readonly object _padlock = new object();

    private ThreadSafeSingletonString()
    {
    }

    public static ThreadSafeSingletonString Instance
    {
        get
        {
            if (_instance == null)
            {
                lock (_padlock)
                {
                    if (_instance == null)
                    {
                        _instance = new ThreadSafeSingletonString();
                        _instance.number = "";
                    }
                }
            }
            return _instance;
        }
        set
        {
            _instance.number = value.number;
            _instance.NotifyDataChanged();
        }
    }

    public string number { get; set; }

    public event Action OnChange;

    private void NotifyDataChanged() => OnChange?.Invoke();
}
And in the Blazor page's code section I have:
protected override void OnInitialized()
{
    threadSafeSingleton.OnChange += updateNumber();
}

public System.Action updateNumber()
{
    this.fromrefresh = threadSafeSingleton.number + " que vem.";
    Console.WriteLine("Passou pelo UpdateNumber");
    this.StateHasChanged();
    return StateHasChanged;
}
Unfortunately the updateNumber function never gets executed...
To force a refresh of the UI you can call the StateHasChanged() method on your component:
https://learn.microsoft.com/en-us/dotnet/api/microsoft.aspnetcore.components.componentbase.statehaschanged?view=aspnetcore-3.1
Notifies the component that its state has changed. When applicable, this will cause the component to be re-rendered.
Hope this helps
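As a hedged sketch of how the event-based attempt from the question's update could be wired up (assuming the ThreadSafeSingletonString above), the two key points are subscribing the method group rather than invoking it, and marshalling the re-render through InvokeAsync:
protected override void OnInitialized()
{
    // Subscribe the method itself; "+= updateNumber()" would invoke it once
    // and subscribe its return value instead.
    threadSafeSingleton.OnChange += UpdateNumber;
}

private void UpdateNumber()
{
    fromrefresh = threadSafeSingleton.number + " que vem.";
    // The event may fire on a non-UI thread, so re-render via InvokeAsync.
    InvokeAsync(StateHasChanged);
}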
Simple Request
After fully understanding that your problem is just to update the page, not to get asynchronous messages from the server over a bidirectional connection: you just have to change your page as below (please note there is no need to change the files generated by gRPC; I called it Number.proto, so my service is named NumberService):
async Task SayHello()
{
    // Request via gRPC
    var channel = new Channel(Host + ":" + Port, ChannelCredentials.Insecure);
    var client = new NumberService.NumberServiceClient(channel);
    var request = new Number
    {
        identification = "ABC"
    };
    var reply = await client.SendNumberAsync(request);
    await channel.ShutdownAsync();
    // Update page
    this.Greetmsg = reply.RequestAccepted.ToString();
    InvokeAsync(StateHasChanged); // required to refresh the page
}
Bi Directional
For making a continuous bidirectional connection you need to change the proto file to use streams, like:
service ChatService {
    rpc chat(stream ChatMessage) returns (stream ChatMessageFromServer);
}
This chat sample is from https://github.com/meteatamel/grpc-samples-dotnet
The main challenge in this is to separate the task waiting for the gRPC server from the client. I found that BackgroundService is good for this. So create a service inherited from BackgroundService and place the while loop that waits for the server in the ExecuteAsync method. Also define an Action callback to update the page (alternatively you can use an event):
public class MyChatService : BackgroundService
{
    Random _random = new Random();

    public Action<string> Callback { get; set; }

    protected override async Task ExecuteAsync(CancellationToken stoppingToken)
    {
        while (!stoppingToken.IsCancellationRequested)
        {
            // Replace the next lines with the code that requests and waits for your server...
            using (_call = _chatService.chat())
            {
                // Read messages from the response stream
                while (await _call.ResponseStream.MoveNext(CancellationToken.None))
                {
                    var serverMessage = _call.ResponseStream.Current;
                    var otherClientMessage = serverMessage.Message;
                    // Format the message and hand it to the page
                    var displayMessage = string.Format("{0}:{1}{2}", otherClientMessage.From, otherClientMessage.Message, Environment.NewLine);
                    Callback?.Invoke(displayMessage);
                }
            }
        }
    }
}
On the page, initialize the BackgroundService and set the callback:
@page "/greeter"
@using System.Threading

<p>Current Number: @currentNumber</p>

@code {
    string currentNumber = "";
    MyChatService myChatService;

    protected override async Task OnInitializedAsync()
    {
        myChatService = new MyChatService();
        myChatService.Callback = message =>
        {
            currentNumber = message;
            InvokeAsync(StateHasChanged);
        };
        await myChatService.StartAsync(new CancellationToken());
    }
}
More information on BackgroundService in .NET Core can be found here: https://gunnarpeipman.com/dotnet-core-worker-service/
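In a full ASP.NET Core host, a BackgroundService would more typically be registered once with dependency injection rather than constructed per page; a minimal sketch, assuming the generic host:
// In ConfigureServices: share one instance between DI and the hosted-service runner.
services.AddSingleton<MyChatService>();
services.AddHostedService(sp => sp.GetRequiredService<MyChatService>());

// In the page, inject the shared instance instead of newing it up:
// @inject MyChatService myChatService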

Jersey 2 Client reuse not working with AsyncInvoker

I am trying to reuse a Jersey 2 (2.16) Client for async invocation. However, after 2 requests, I see the threads going into a waiting state, waiting on a lock. Since client creation is an expensive operation, I am trying to reuse the client in the async calls. The issue occurs only with ApacheConnectorProvider as the connector class. I want to use ApacheConnectorProvider because I need to use a proxy and set SSL properties, and I want to use PoolingHttpClientConnectionManager.
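(For context, a hedged sketch of how such a connection manager is typically wired in, assuming Jersey 2.16's Apache connector and HttpClient 4.x; the pool sizes are illustrative:)
ClientConfig clientConfig = new ClientConfig();
PoolingHttpClientConnectionManager cm = new PoolingHttpClientConnectionManager();
cm.setMaxTotal(100);          // illustrative
cm.setDefaultMaxPerRoute(20); // illustrative; the Apache default of 2 per route is easy to exhaust
clientConfig.property(ApacheClientProperties.CONNECTION_MANAGER, cm);
clientConfig.connectorProvider(new ApacheConnectorProvider());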
The sample code is given below:
public class Example {
    Integer eventId = 0;

    private ClientConfig getClientConfig() {
        ClientConfig clientConfig = new ClientConfig();
        ApacheConnectorProvider provider = new ApacheConnectorProvider();
        clientConfig.property(ClientProperties.REQUEST_ENTITY_PROCESSING, RequestEntityProcessing.BUFFERED);
        clientConfig.connectorProvider(provider);
        return clientConfig;
    }

    private Client createClient() {
        Client client = ClientBuilder.newClient(getClientConfig());
        return client;
    }

    public void testAsyncCall() {
        Client client = createClient();
        System.out.println("Testing a new Async call on thread " + Thread.currentThread().getId());
        JSONObject jsonObject = new JSONObject();
        jsonObject.put("value", eventId++);
        invoker(client, "http://requestb.in/nn0sffnn", jsonObject);
        invoker(client, "http://requestb.in/nn0sffnn", jsonObject);
        invoker(client, "http://requestb.in/nn0sffnn", jsonObject);
        client.close();
    }

    private void invoker(Client client, String URI, JSONObject jsonObject) {
        final Future<Response> responseFuture = client.target(URI)
                .request()
                .async()
                .post(Entity.entity(jsonObject.toJSONString(), MediaType.TEXT_PLAIN));
        try {
            Response r = responseFuture.get();
            System.out.println("Response is on URI " + URI + " : " + r.getStatus());
        } catch (InterruptedException e) {
            e.printStackTrace();
        } catch (ExecutionException e) {
            e.printStackTrace();
        }
    }

    public static void main(String args[]) {
        Example client1 = new Example();
        client1.testAsyncCall();
        return;
    }
}
The response I see is:
Testing a new Async call on thread 1
Response is on URI http://requestb.in/nn0sffnn : 200
Response is on URI http://requestb.in/nn0sffnn : 200
On looking at the thread stack, I see the following trace:
"jersey-client-async-executor-0" prio=6 tid=0x043a4c00 nid=0x56f0 waiting on condition [0x03e5f000]
java.lang.Thread.State: WAITING (parking)
at sun.misc.Unsafe.park(Native Method)
- parking to wait for <0x238ee148> (a java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
at java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
at org.apache.http.pool.PoolEntryFuture.await(PoolEntryFuture.java:133)
at org.apache.http.pool.AbstractConnPool.getPoolEntryBlocking(AbstractConnPool.java:282)
at org.apache.http.pool.AbstractConnPool.access$000(AbstractConnPool.java:64)
at org.apache.http.pool.AbstractConnPool$2.getPoolEntry(AbstractConnPool.java:177)
at org.apache.http.pool.AbstractConnPool$2.getPoolEntry(AbstractConnPool.java:170)
Can someone give me a suggestion as to how to reuse Client objects for async requests, and maybe how to get over this issue as well?

SignalR Context Thread Safety

We are using SignalR to push messages from server to client. Some of the things we are using server broadcast for:
Live notifications
Updating changes of shared data
Chat like functionality
One of our devs started with the StockTicker example, and we expanded it to push all of our different message types. Here is our general scheme:
private void RunJobs()
{
    _jobs = GetAllJobs();
    while (true)
    {
        bool workDone = false;
        for (int i = 0; i < _jobs.Count; i++)
        {
            var j = _jobs.ElementAt(i);
            bool workToDo = j.MessageAvailable();
            workDone = workDone || workToDo;
            if (workToDo)
            {
                var message = j.GetMessage();
                _threadPool.QueueWorkItem(ProcessJob, j, message);
            }
        }
        if (!workDone)
        {
            Thread.Sleep(_sleepTime);
        }
    }
}
/// <summary>
/// Method called by threads to process queued up Work Item (ISignalRJob)
/// </summary>
/// <param name="job">Job to run.</param>
private void ProcessJob(ISignalRJob job, QueueMessage message)
{
    try
    {
        job.ProcessMessage(message);
    }
    catch (Exception e)
    {
        // handle exception
    }
}
As each job processes, it performs an operation like:
protected override void ProcessMessage(QueueMessage message)
{
    var nqm = JsonConvert.DeserializeObject<NotificationQueueMessage>(message.Body);
    var notification = webService.GetNotification(nqm.Id);
    foreach (var userConnectionId in GetUserConnectionIds(nqm.UserId))
    {
        _signalRConnectionContext.Clients.Client(userConnectionId).pushNotification(notification);
    }
}
In a thread, we monitor a series of queues for messages. If a message turns up, we pop it off the queue and start a new thread to process it (ProcessJob). The jobs then make any service calls / DB calls necessary to build the client message, and push the message to the client.
The service seems to work, but periodically the client stops receiving messages, although I have verified they are being sent from the server. Is it possible that pushing to a client connection from multiple threads is putting it in a bad state?
Should I perhaps return the results of the QueueMessage processing to the main SignalR thread, and push them all synchronously?
