SignalR Context Thread Safety

We are using SignalR to push messages from server to client. Some of the things we are using server broadcast for:
Live notifications
Updating changes of shared data
Chat like functionality
One of our devs started with the StockTicker example, and we expanded it to push all of our different message types. Here is our general scheme:
private void RunJobs()
{
    _jobs = GetAllJobs();
    while (true)
    {
        bool workDone = false;
        for (int i = 0; i < _jobs.Count; i++)
        {
            var j = _jobs.ElementAt(i);
            bool workToDo = j.MessageAvailable();
            workDone = workDone || workToDo;
            if (workToDo)
            {
                var message = j.GetMessage();
                _threadPool.QueueWorkItem(ProcessJob, j, message);
            }
        }
        if (!workDone)
        {
            Thread.Sleep(_sleepTime);
        }
    }
}

/// <summary>
/// Method called by worker threads to process a queued-up work item (ISignalRJob).
/// </summary>
/// <param name="job">Job to run.</param>
/// <param name="message">Message the job should process.</param>
private void ProcessJob(ISignalRJob job, QueueMessage message)
{
    try
    {
        job.ProcessMessage(message);
    }
    catch (Exception e)
    {
        // handle exception
    }
}
As each job is processed, it performs an operation like:
protected override void ProcessMessage(QueueMessage message)
{
    var nqm = JsonConvert.DeserializeObject<NotificationQueueMessage>(message.Body);
    // Look up the notification referenced by the queue message (property name assumed)
    var notification = webService.GetNotification(nqm.NotificationId);
    foreach (var userConnectionId in GetUserConnectionIds(nqm.UserId))
    {
        _signalRConnectionContext.Clients.Client(userConnectionId).pushNotification(notification);
    }
}
So the scheme is: a thread monitors a series of queues for messages; when a message turns up, it pops the message off the queue and hands it to a thread-pool worker to process (ProcessJob). The jobs then make whatever service/DB calls are necessary to build the client message and push the message to the client.
The service seems to work, but periodically the client stops receiving messages, even though I have verified they are being sent from the server. Is it possible that pushing to a client connection from multiple threads is putting it in a bad state?
Should I perhaps be returning the results of the QueueMessage processing to the main SignalR thread and pushing them all synchronously, along the lines of the sketch below?
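Something like this sketch is what I have in mind (illustrative only; the queue, field, and method names are not from our real code, and it needs System.Collections.Concurrent):
// Workers only build the payload; one dedicated thread drains the queue
// and performs every SignalR push, so the connection context is never
// touched from multiple threads at once.
private readonly BlockingCollection<(string ConnectionId, object Payload)> _outgoing =
    new BlockingCollection<(string ConnectionId, object Payload)>();

// Called from ProcessMessage instead of pushing to the client directly.
private void EnqueuePush(string connectionId, object payload)
{
    _outgoing.Add((connectionId, payload));
}

// Runs on a single dedicated thread for the lifetime of the service.
private void PushLoop()
{
    foreach (var item in _outgoing.GetConsumingEnumerable())
    {
        try
        {
            _signalRConnectionContext.Clients.Client(item.ConnectionId).pushNotification(item.Payload);
        }
        catch (Exception)
        {
            // log and keep the push loop alive
        }
    }
}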

Related

.NET 6 Async Semaphore Error Under Mild Load

I'm working on a basic (non-DB) connection pool which allows only one connection to be created per project.
The connection pool supports an async-task/threaded environment, and therefore I have made use of a semaphore instead of a regular lock.
I wrote a test, below, which is meant to stress test the connection pool.
The code works, but under higher load the semaphore throws an exception.
I can overcome this error by decreasing the load.
For example, increasing the _waitTimeMs to a higher number (i.e. 50ms or 100ms or 1000ms) or decreasing _numberOfTasks (i.e. to 5 or 3).
I should also mention that sometimes, it manages to run higher load tests without errors.
Is there a mistake or misconception in my code and/or use of semaphores?
using System;
using System.Collections.Concurrent;
using System.Collections.Generic;
using System.Threading;
using System.Threading.Tasks;

internal class Program
{
    static int _numberOfTasks = 50;
    static int _waitTimeMs = 1;
    static SemaphoreSlim _dictLock = new SemaphoreSlim(1, 1);
    static ConcurrentDictionary<string, bool> _pool = new ConcurrentDictionary<string, bool>();

    /// <summary>
    /// Only 1 connection allowed per project.
    /// We reuse connections if available in pool, otherwise we create 1 new connection.
    /// </summary>
    static async Task<string> GetConnection(string projId)
    {
        try
        {
            // Enter sema lock to prevent more than 1 connection
            // from being added for the same project.
            if (await _dictLock.WaitAsync(_waitTimeMs))
            {
                // Try retrieve connection from pool
                if (_pool.TryGetValue(projId, out bool value))
                {
                    if (value == false)
                        return "Exists but not connected yet.";
                    else
                        return "Success, exists and connected.";
                }
                // Else add connection to pool
                else
                {
                    _pool.TryAdd(projId, false);
                    // Simulate delay in establishing new connection
                    await Task.Delay(2);
                    _pool.TryUpdate(projId, true, false);
                    return "Created new connection successfully & added to pool.";
                }
            }
            // Report failure to acquire lock in time.
            else
                return "Server busy. Please try again later.";
        }
        catch (Exception ex)
        {
            return "Error " + ex.Message;
        }
        finally
        {
            // Ensure our lock is released.
            _dictLock.Release();
        }
    }

    static async Task Main(string[] args)
    {
        if (true)
        {
            // Create a collection of the same tasks
            List<Task> tasks = new List<Task>();
            for (int i = 0; i < _numberOfTasks; i++)
            {
                // Each task will try to get an existing or create new connection to Project1
                var t = new Task(async () => { Console.WriteLine(await GetConnection("Project1")); });
                tasks.Add(t);
            }
            // Execute these tasks in parallel.
            Parallel.ForEach<Task>(tasks, (t) => { t.Start(); });
            Task.WaitAll(tasks.ToArray());
            Console.WriteLine("Done");
            Console.Read();
        }
    }
}
Is there a mistake or misconception in my code and/or use of semaphores?
There's a bug in your code, yes. If the WaitAsync returns false (indicating that the semaphore was not taken), then the semaphore is still released in the finally block.
If you must use a timeout with WaitAsync (which is highly unusual and questionable), then your code should only call Release if the semaphore was actually taken.
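In other words, something like this sketch (the acquired flag is just an illustrative name):
static async Task<string> GetConnection(string projId)
{
    bool acquired = false;
    try
    {
        acquired = await _dictLock.WaitAsync(_waitTimeMs);
        if (!acquired)
            return "Server busy. Please try again later.";

        // ... pool lookup / creation as before ...
        return "Success.";
    }
    finally
    {
        // Only release the semaphore if it was actually entered;
        // otherwise Release() can exceed the max count and throw.
        if (acquired)
            _dictLock.Release();
    }
}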

Consuming a message for a topic in Confluent.Kafka when the consumer starts later

I'm using Confluent.Kafka (1.4.4) in a .NET Core project as a message broker. In the project's startup I only set "bootstrapservers" to the specific servers listed in the appSettings.json file, and I produce messages from an API when necessary with the code below in the related class:
public async Task WriteMessage<T>(string topicName, T message)
{
    using (var p = new ProducerBuilder<Null, string>(_producerConfig).Build())
    {
        try
        {
            var serializedMessage = JsonConvert.SerializeObject(message);
            var dr = await p.ProduceAsync(topicName, new Message<Null, string> { Value = serializedMessage });
            logger.LogInformation($"Delivered '{dr.Value}' to '{dr.TopicPartitionOffset}'");
        }
        catch (ProduceException<Null, string> e)
        {
            logger.LogInformation($"Delivery failed: {e.Error.Reason}");
        }
    }
}
I have also added the following code in the consumer solution:
public async Task Run()
{
    using (var consumerBuilder = new ConsumerBuilder<Ignore, string>(_consumerConfig).Build())
    {
        consumerBuilder.Subscribe(new List<string>() { "ActiveMemberCardForPanClubEvent", "CreatePanClubEvent", "RemovePanClubEvent" });
        CancellationTokenSource cts = new CancellationTokenSource();
        Console.CancelKeyPress += (_, e) =>
        {
            e.Cancel = true; // prevent the process from terminating.
            cts.Cancel();
        };
        try
        {
            while (true)
            {
                try
                {
                    var consumer = consumerBuilder.Consume(cts.Token);
                    if (consumer.Message != null)
                    {
                        using (LogContext.PushProperty("RequestId", Guid.NewGuid()))
                        {
                            // Do something
                            logger.LogInformation($"Consumed message '{consumer.Message.Value}' at: '{consumer.TopicPartitionOffset}'.");
                            await DoJob(consumer.Topic, consumer.Message.Value);
                            consumer.Topic.Remove(0, consumer.Topic.Length);
                        }
                    }
                    else
                    {
                        logger.LogInformation($"message is null for topic '{consumer.Topic}' and partition: '{consumer.TopicPartitionOffset}'.");
                        consumer.Topic.Remove(0, consumer.Topic.Length);
                    }
                }
                catch (ConsumeException e)
                {
                    logger.LogInformation($"Error occurred: {e.Error.Reason}");
                }
            }
        }
        catch (OperationCanceledException)
        {
            // Ensure the consumer leaves the group cleanly and final offsets are committed.
            consumerBuilder.Close();
        }
    }
}
I produce a message, and when the consumer project is running everything works perfectly: the message is read in the consumer solution.
The problem arises when the consumer project is not running and I queue a message from the API with the producer. After the consumer is started, there is no message for that topic even though one was produced.
I am familiar with message brokers and I know that a sent message normally stays on the bus until it is consumed, but I don't understand why that doesn't work with Kafka in this project.
The default setting for the "auto.offset.reset" Consumer property is "latest".
That means (in the context of no offsets being written yet) if you write a message to some topic and then subsequently start the consumer, it will skip past any messages written before the consumer was started. This could be why your consumer is not seeing the messages queued by your producer.
The solution is to set "auto.offset.reset" to "earliest" which means that the consumer will start from the earliest offset on the topic.
https://docs.confluent.io/current/installation/configuration/consumer-configs.html#auto.offset.reset
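With the strongly typed config in Confluent.Kafka 1.x that looks roughly like this (broker address and group id are placeholders):
var consumerConfig = new ConsumerConfig
{
    BootstrapServers = "localhost:9092",   // your brokers
    GroupId = "my-consumer-group",
    // Start from the earliest offset when no committed offset exists yet,
    // so messages produced before the consumer first ran are not skipped.
    AutoOffsetReset = AutoOffsetReset.Earliest
};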

Java gRPC: exception from client to server

Is it possible to throw an exception from the client to the server?
We have an open stream from the server to the client:
rpc addStream(Request) returns (stream StreamMessage) {}
When I try something like this:
throw Status.INTERNAL.withDescription(e.getMessage()).withCause(e.getCause()).asRuntimeException();
I get the exception in StreamObserver.onError on the client, but there is no exception on the server side.
Servers can respond with a "status" that the stub API exposes as a StatusRuntimeException. Clients, however, can only "cancel" the RPC. Servers will not know the source of the cancellation; it could be because the client cancelled or maybe the TCP connection broke.
In a client-streaming or bidi-streaming call, the client can cancel by calling observer.onError() (without ever calling onCompleted()). However, if the client called onCompleted() or the RPC has a unary request, then you need to use ClientCallStreamObserver or Context:
stub.someRpc(request, new ClientResponseObserver<Request, Response>() {
    private ClientCallStreamObserver<Request> requestStream;

    @Override public void beforeStart(ClientCallStreamObserver<Request> requestStream) {
        this.requestStream = requestStream;
    }
    ...
});

// And then where you want to cancel.
// requestStream is non-thread-safe. For unary requests, wait until
// stub.someRpc() returns, since it uses the stream internally.
// The string is not sent to the server. It is just "echoed"
// back to the client's `onError()` to make clear that the
// cancellation was locally caused.
requestStream.cancel("some message for yourself", null);

// For thread-safe cancellation (e.g., for client-streaming)
CancellableContext ctx = Context.current().withCancellation();
StreamObserver requestObserver = ctx.call(() ->
    stub.someRpc(new StreamObserver<Response>() {
        @Override public void onNext(Response response) {
            // handle the response
        }

        @Override public void onCompleted() {
            // The ctx must be closed when done, to avoid leaks
            ctx.cancel(null);
        }

        @Override public void onError(Throwable t) {
            ctx.cancel(null);
        }
    }));

// The place you want to cancel
ctx.cancel(ex);

Rebus - Bus with sql transport in MemoryCache callback

I have a message handler which accumulates messages in a MemoryCache for a given time, so that only the last one will be handled.
When the callback fires I want to forward another message to a handler using the SQL transport, but the SQL connection has been closed by then.
The code looks something like this:
public IBus SqlBus { get; set; }

public async Task Handle(ServiceMessage message)
{
    await base.Handle(() =>
    {
        cache.Set(CacheKey, message, new CacheItemPolicy()
        {
            AbsoluteExpiration = DateTimeOffset.Now.AddSeconds(10),
            RemovedCallback = new CacheEntryRemovedCallback(CacheCallback),
        });
        return Task.FromResult(0);
    }, message);
}

private void CacheCallback(CacheEntryRemovedArguments arguments)
{
    if (arguments.RemovedReason == CacheEntryRemovedReason.Expired)
    {
        var message = arguments.CacheItem.Value as ServiceMessage;
        SqlBus.Send(new AnotherMessage()).GetAwaiter().GetResult();
    }
}
Are there any approaches that would let me do this?
When is the CacheCallback method called, and on which thread?
It sounds to me like the problem is that the thread calling CacheCallback has a value in AmbientTransactionContext.Current, which is where Rebus enlists queue operations when it can.
If the transaction context was somehow preserved even though the handler finished executing, then the things cached on it (e.g. the SqlConnection and SqlTransaction associated with the SQL transport) will have been closed.
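If that is what is happening, one thing to try is detaching from the ambient context around the send. This is only a sketch and assumes the Rebus version in use exposes AmbientTransactionContext.Current/SetCurrent; verify that against your version:
private void CacheCallback(CacheEntryRemovedArguments arguments)
{
    if (arguments.RemovedReason != CacheEntryRemovedReason.Expired) return;

    var message = arguments.CacheItem.Value as ServiceMessage;

    // Remember whatever (possibly completed) handler transaction context is
    // still flowing on this thread, detach from it so Send uses a fresh
    // connection, and restore it afterwards.
    var stale = AmbientTransactionContext.Current;
    try
    {
        AmbientTransactionContext.SetCurrent(null);
        SqlBus.Send(new AnotherMessage()).GetAwaiter().GetResult();
    }
    finally
    {
        AmbientTransactionContext.SetCurrent(stale);
    }
}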

Throttling outgoing HTTP requests with Async Jersey HTTP Client using RxJava

I am trying to throttle outgoing HTTP requests using the Jersey client. Since I am running in a Vert.x Verticle, I created a special RateLimiter class to handle the throttling.
My goal is to prevent HTTP calls from being made at a rate greater than 1 per second. The idea is that a submitted callable runs on a single-threaded ExecutorService, so I can block that single thread to guarantee that these tasks are not handled at a greater rate.
Basically the only public method in this class is "call":
public <T> Observable<T> call(Callable<Observable<T>> action) {
    return Observable.create(subscriber -> {
        Observable<Observable<T>> observed =
            Observable.from(executor.submit(() -> {
                return action.call();
            })).doOnError(throwable -> {
                logger.error(throwable);
            });
        observed.subscribe(t -> {
            try {
                Thread.sleep(1000);
                t.subscribe(data -> {
                    try {
                        subscriber.onNext(data);
                    } catch (Throwable e) {
                        subscriber.onError(e);
                    }
                    subscriber.onCompleted();
                });
            } catch (Exception e) {
                logger.error(e);
            }
        });
    });
}
This is my current implementation, which sleeps for one second no matter how much time has passed since the previous call. Initially I tried using a ScheduledExecutorService and calculating the delay so that requests would be submitted at exactly 1 per second. However, in both cases it often fails to meet the rate restriction and I get two requests submitted immediately one after the other.
My assumption is that somewhere the requests are being handed off to a different execution queue that is polled continuously by another thread, so if that thread is busy and two requests are in the queue at the same time, they are executed back to back with no delay.
Any ideas how to resolve this? Maybe a different approach?
I would go with a simple Vert.x event bus and a queue which you poll every second:
public static void main(String[] args) {
    Vertx vertx = Vertx.vertx();
    vertx.deployVerticle(new DebounceVerticle(), (r) -> {
        // Ok, verticle is ready!
        // Request to send 10 events in 1 second
        for (int i = 0; i < 10; i++) {
            vertx.eventBus().publish("call", UUID.randomUUID().toString());
        }
    });
}

private static class DebounceVerticle extends AbstractVerticle {

    HttpClient client;

    @Override
    public void start() {
        client = vertx.createHttpClient();
        BlockingQueue<String> queue = new LinkedBlockingQueue<>();
        vertx.eventBus().consumer("call", (payload) -> {
            String message = (String) payload.body();
            queue.add(message);
            System.out.println(String.format("I got %s but I don't know when it will be executed", message));
        });
        vertx.setPeriodic(1000, (l) -> {
            String message = queue.poll();
            if (message != null) {
                System.out.println(String.format("I'm finally sending %s", message));
                // Do your client magic
            }
        });
    }
}
Just prepend the web service call with a Guava RateLimiter. Here's an example in RxJava which shows how events emitted every 500 ms are throttled to once per second.
Function<Long, Long> throttlingFunction = new Function<Long, Long>() {
    private RateLimiter limiter = RateLimiter.create(1.0);

    public Long apply(Long t) throws Exception {
        limiter.acquire();
        return t;
    }
};

Observable.interval(500, TimeUnit.MILLISECONDS)
    .map(throttlingFunction)
    .subscribe(new Consumer<Long>() {
        public void accept(Long t) throws Exception {
            System.out.println(t);
        }
    });
Also, in Vert.x all blocking work is supposed to be run with the help of executeBlocking.
