Are group subscriptions automatically handled on Reconnect?

I have a chat room that uses a SignalR Hub for its messaging. Occasionally I get reports from users that it 'freezes': no messages come through. I suspect their connections have been dropped from the group.
My question is: does the connection automatically get re-subscribed to its groups, or do you have to handle that yourself in the Reconnect method?
public Task Reconnect(IEnumerable<string> groups)
{
    // 'groups' is the set of groups the client reports belonging to
    return Clients.rejoined(Context.ConnectionId, DateTime.Now.ToString());
}

Yes. In 1.0.0-alpha1 you can enable automatic rejoining of groups with the new AutoRejoiningGroupsModule pipeline module, via the EnableAutoRejoiningGroups extension method on the hub pipeline you build. This feature was not available in earlier versions of the framework.
So you would end up with this somewhere in your startup code:
GlobalHost.HubPipeline.EnableAutoRejoiningGroups();
UPDATE:
Please note that the final version of SignalR 1.0 made auto-rejoining of groups the default behavior, so EnableAutoRejoiningGroups was removed. See this answer for more details.

Related

Kafka Listeners stop reading from topics after a few hours

An app I have been working on has started causing issues in our staging and production environments: the Kafka listeners stop reading from their assigned topics a few hours after the app starts.
The app runs in a Cloud Foundry environment and has 13 @KafkaListeners, each reading from multiple topics that match its given pattern. The number of topics is the same for each listener (every user of the app creates their own topic for each of the 13 listeners, following the pattern). Topics have 3 partitions. Auto-scaling is also used, with a minimum of 2 instances of the app running at the same time. One of the topics is under heavier load than the others, receiving between 1 and 200 messages each second. The processing time for each message is short, as we receive batches and the processing step only writes the batch to a DB.
The current issue is, as stated, that the app works for a while after starting and then the listeners suddenly stop picking up messages, with no apparent error or warning in the logs. A temporary endpoint was created that uses KafkaListenerEndpointRegistry to inspect the listener containers; all of them appear to be running and have proper partitions assigned. Calling .stop() and .start() on the containers leads to one additional batch of messages being processed, and then nothing more.
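For reference, the inspection described above looks roughly like this (a sketch, assuming an injected KafkaListenerEndpointRegistry; method names per spring-kafka 2.7.x):

import org.springframework.kafka.config.KafkaListenerEndpointRegistry;
import org.springframework.kafka.listener.MessageListenerContainer;
import org.springframework.stereotype.Component;

// Hypothetical diagnostic component (not the actual endpoint from the app):
// prints each listener container's state to confirm it is running and has
// partitions assigned.
@Component
public class ListenerDiagnostics {

    private final KafkaListenerEndpointRegistry registry;

    public ListenerDiagnostics(KafkaListenerEndpointRegistry registry) {
        this.registry = registry;
    }

    public void dumpContainerState() {
        for (MessageListenerContainer container : registry.getListenerContainers()) {
            System.out.printf("listener=%s running=%b partitions=%s%n",
                    container.getListenerId(),
                    container.isRunning(),
                    container.getAssignedPartitions());
        }
    }
}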
The following are the configs used:
@Bean
public ConsumerFactory<String, String> consumerFactory() {
    return new DefaultKafkaConsumerFactory<>(kafkaConfig.getConfiguration());
}

@Bean
public ConcurrentKafkaListenerContainerFactory<String, String> kafkaListenerContainerFactory() {
    ConcurrentKafkaListenerContainerFactory<String, String> factory =
            new ConcurrentKafkaListenerContainerFactory<>();
    factory.setConsumerFactory(consumerFactory());
    factory.setBatchListener(true);
    factory.setConcurrency(3);
    factory.getContainerProperties().setPollTimeout(5000);
    factory.getContainerProperties().setAckMode(ContainerProperties.AckMode.MANUAL_IMMEDIATE);
    return factory;
}
The kafkaConfig sets the following settings:
PARTITION_ASSIGNMENT_STRATEGY_CONFIG: RoundRobinAssignor
MAX_POLL_INTERVAL_MS_CONFIG: 60000
MAX_POLL_RECORDS_CONFIG: 10
MAX_PARTITION_FETCH_BYTES_CONFIG: Integer.MAX_VALUE
ENABLE_AUTO_COMMIT_CONFIG: false
METADATA_MAX_AGE_CONFIG: 15000
REQUEST_TIMEOUT_MS_CONFIG: 30000
HEARTBEAT_INTERVAL_MS_CONFIG: 15000
SESSION_TIMEOUT_MS_CONFIG: 60000
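For context, those settings would correspond to a consumer configuration map along these lines (a sketch of what kafkaConfig.getConfiguration() plausibly returns; the bootstrap servers and deserializers are assumptions, not shown in the post):

import java.util.HashMap;
import java.util.Map;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.RoundRobinAssignor;
import org.apache.kafka.common.serialization.StringDeserializer;

// Sketch of what kafkaConfig.getConfiguration() plausibly returns, based on
// the settings listed above; bootstrap servers and deserializers are assumed.
public final class KafkaConfigSketch {
    public static Map<String, Object> getConfiguration() {
        Map<String, Object> props = new HashMap<>();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // assumed
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        props.put(ConsumerConfig.PARTITION_ASSIGNMENT_STRATEGY_CONFIG, RoundRobinAssignor.class.getName());
        props.put(ConsumerConfig.MAX_POLL_INTERVAL_MS_CONFIG, 60000);
        props.put(ConsumerConfig.MAX_POLL_RECORDS_CONFIG, 10);
        props.put(ConsumerConfig.MAX_PARTITION_FETCH_BYTES_CONFIG, Integer.MAX_VALUE);
        props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, false);
        props.put(ConsumerConfig.METADATA_MAX_AGE_CONFIG, 15000);
        props.put(ConsumerConfig.REQUEST_TIMEOUT_MS_CONFIG, 30000);
        props.put(ConsumerConfig.HEARTBEAT_INTERVAL_MS_CONFIG, 15000);
        props.put(ConsumerConfig.SESSION_TIMEOUT_MS_CONFIG, 60000);
        return props;
    }
}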
Additionally, each listener is in its own class and has the listen method as follows:
@KafkaListener(id = "<patternName>-container", topicPattern = "<patternName>.*", groupId = "<patternName>Group")
public void listen(@Payload List<String> payloads,
                   @Header(KafkaHeaders.RECEIVED_TOPIC) String topics,
                   Acknowledgment acknowledgement) {
    // processPayload...
    acknowledgement.acknowledge();
}
The spring-kafka version is 2.7.4.
Is there an issue with this config that could explain the problem? I have recently tried multiple changes with no success: shuffling the config settings above, moving the @KafkaListener annotation to class level, restarting the listener containers when they stop reading, and even doing all message processing asynchronously and acknowledging messages the moment they are picked up by the listener method. There were no errors or warning logs, and I wasn't able to see anything helpful in debug logging due to the number of messages sent each second. We also have another app running the same settings in the same environments, but with only 3 listeners (different topic patterns), where this issue does not occur. It is under a similar load, as the messages received by those 3 listeners are output to the topic causing the heavy load on the problematic app.
I would very much appreciate any help or pointers to what else I can do, since this issue is blocking us heavily in production. Let me know if I missed something that could help.
Thank you.
Most problems like this are due to the listener thread being stuck in user code someplace; take a thread dump when this happens to see what the threads are doing.
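If attaching jstack to the process is awkward in Cloud Foundry, the same information can be captured in-process; a minimal sketch using the standard JMX thread bean:

import java.lang.management.ManagementFactory;
import java.lang.management.ThreadInfo;
import java.lang.management.ThreadMXBean;

// Minimal in-process thread dump, for when running `jstack <pid>` against the
// container is not practical. Look for the listener threads and check whether
// they are blocked in user code (DB writes, locks, etc.).
public final class ThreadDumper {
    public static String dump() {
        ThreadMXBean threads = ManagementFactory.getThreadMXBean();
        StringBuilder sb = new StringBuilder();
        for (ThreadInfo info : threads.dumpAllThreads(true, true)) {
            sb.append(info); // includes the thread's state and stack trace
        }
        return sb.toString();
    }
}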

How to deduplicate events when using RabbitMQ Publish/Subscribe Microservice Event Bus

I have been reading this book (page 58) to understand how to do asynchronous event integration between microservices.
Using RabbitMQ and the publish/subscribe pattern facilitates pushing events out to subscribers. However, given a microservice architecture and Docker usage, I expect to have more than one instance of a microservice 'type' running. From what I understand, all instances will subscribe to the event and therefore would all receive it.
The book doesn't clearly explain how to ensure only one of the instances handles the request.
I have looked into the deduplication section, but it describes a pattern for deduplicating within a service instance, not across instances.
Each microservice instance would subscribe using something similar to:
public void Subscribe<T, TH>()
    where T : IntegrationEvent
    where TH : IIntegrationEventHandler<T>
{
    var eventName = _subsManager.GetEventKey<T>();
    var containsKey = _subsManager.HasSubscriptionsForEvent(eventName);
    if (!containsKey)
    {
        if (!_persistentConnection.IsConnected)
        {
            _persistentConnection.TryConnect();
        }
        using (var channel = _persistentConnection.CreateModel())
        {
            // bind this service's queue to the exchange for this event name
            channel.QueueBind(queue: _queueName,
                              exchange: BROKER_NAME,
                              routingKey: eventName);
        }
    }
    _subsManager.AddSubscription<T, TH>();
}
I need to understand how multiple instances of the same 'type' of microservice can deduplicate without losing the message if a service goes down while processing it.
From what I understand all instances will subscribe to the event and therefore would all receive it.
Only one instance of the subscriber will process each message/event. When you have multiple instances of a service running and subscribed to the same queue/subscription, the broker delivers each message to just one of them and hides it from the rest (SQS and Azure Service Bus call this a visibility timeout; RabbitMQ achieves the equivalent with unacknowledged deliveries). If the instance processes the message in the given time, it tells the queue to delete the message; if it does not, the message reappears in the queue for any instance to pick up again.
All standard message brokers (RabbitMQ, SQS, Azure Service Bus, etc.) provide this feature out of the box.
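To make that concrete, here is a minimal competing-consumers sketch using the RabbitMQ Java client; the broker address and queue name are assumptions, not taken from the question:

import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;
import com.rabbitmq.client.DeliverCallback;

// Minimal competing-consumers sketch (RabbitMQ Java client). Every instance
// of the service consumes from the SAME named queue, so the broker gives each
// message to exactly one instance; an unacked message is requeued if that
// instance dies before finishing.
public final class WorkerInstance {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost"); // assumed broker address
        Connection connection = factory.newConnection();
        Channel channel = connection.createChannel();

        channel.queueDeclare("order-events", true, false, false, null); // assumed queue name
        channel.basicQos(1); // at most one unacked message per consumer

        DeliverCallback onDeliver = (consumerTag, delivery) -> {
            // process the event, then ack; if this instance crashes before
            // acking, the broker redelivers the message to another instance
            channel.basicAck(delivery.getEnvelope().getDeliveryTag(), false);
        };
        channel.basicConsume("order-events", false, onDeliver, consumerTag -> { });
    }
}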
By the way, I have read this book and used the above code from eShopOnContainers, and it works the way I described.
You should look into the following pattern as well:
Competing Consumers pattern
Hope that helps!

Spring Boot + Spring Session HttpSessionListener not working

I have a Spring Boot (1.3.2) app in which I have implemented an HttpSessionListener. I registered the listener from a @Configuration class:
@Configuration
@EnableRedisHttpSession
public class ApplicationSessionConfiguration {

    @Bean
    public ServletListenerRegistrationBean<HttpSessionListener> sessionListener() {
        return new ServletListenerRegistrationBean<HttpSessionListener>(new SessionListener());
    }
}
I have debugged into the ServletListenerRegistrationBean.onInitialize method, and the listener is getting registered with the ServletContext. The problem is that when I make a dummy REST call to the app, the session gets created properly and sent back as a SESSION cookie, but HttpSessionListener.sessionCreated is never called. I am not sure what I am missing here.
Looks like the feature you need has not yet been released in a stable build. However, as per this ticket it is fixed and available in the 1.1.0 M1 release of spring-session. You may want to try the 1.1.0.RC1 release of spring-session to see if it does what you want. Exact details on how to get this done can be found in this doc link.
In case using the 1.1.0.RC1 release is NOT an option (or you prefer not to use an RC for whatever reason), you can still intercept session creation and destruction events by extending the default CookieHttpSessionStrategy with your own implementation (say MyCookieHttpSessionStrategy) and overriding onNewSession(..) and onInvalidateSession(..). Register MyCookieHttpSessionStrategy as a normal bean and you are all set (it will be picked up automatically by the Redis session repository). This works just fine with Redis sessions; I use these events in my Spring Boot web app this way.
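A rough sketch of that workaround, assuming the Spring Session 1.0.x CookieHttpSessionStrategy signatures (verify them against the version you actually run):

import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import org.springframework.session.Session;
import org.springframework.session.web.http.CookieHttpSessionStrategy;
import org.springframework.stereotype.Component;

// Sketch of the workaround described above: intercept session creation and
// destruction by extending the default cookie strategy. Signatures are from
// Spring Session 1.0.x; verify against the version you run.
@Component
public class MyCookieHttpSessionStrategy extends CookieHttpSessionStrategy {

    @Override
    public void onNewSession(Session session, HttpServletRequest request,
            HttpServletResponse response) {
        super.onNewSession(session, request, response);
        // session-created logic goes here
    }

    @Override
    public void onInvalidateSession(HttpServletRequest request,
            HttpServletResponse response) {
        super.onInvalidateSession(request, response);
        // session-destroyed logic goes here
    }
}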
Hope this helps!!

In Meteor Methods this.connection is always undefined

I am trying to implement some methods for a DDP API, for use with a C# remote client.
Now, I want to be able to track the connection to implement some type of persistent session. To this end, I hope to be able to use the session id given by DDP on connection, for example:
{
    "msg": "connected",
    "session": "CmnXKZ34aqSnEqscR"
}
After reading the documentation, I see that inside Meteor methods I can access the current connection using "this.connection"; however, I always get an undefined "this.connection".
Was it removed? If so, how can I access it now?
PS: I don't want to log in as a user and access this.userId, since the app I want to create should not log in; it should just get a document id and do the work associated with it, including changes to other collections, but always scoped to ONLY that id. I don't want to have to include this id every time I call a method, since that could lead to security problems if anyone can just send any id. Ideally the app would do a simple login, then associate token details with its "session".
Changing from:
() => { this.connection; }
to:
function() { this.connection; }
solved the problem for me (an arrow function does not bind its own this, so inside it this.connection is undefined). Based on a comment in the accepted answer.
The C# client on GitHub has a few bugs, as it doesn't follow the DDP spec exactly. When you send it commands to connect and run a call, it usually sends the '.call' too soon.
this.connection does work in the server-side Meteor method if you do it this way.
You need to make sure you send the method calls only after you know that you are actually connected. This is what works, at least with Meteor 0.8.2.
I was using a file named ".next.js" to force Meteor to use the newest, unsupported JavaScript spec via a package.
Somehow this messed things up; after changing back to default JavaScript it now works.
Thank you :)
init.coffee
Meteor.startup ->
  # client init
  if Meteor.isClient
    Meteor.call "init"
methods.coffee
Meteor.methods
  init: ->
    console.log @connection.httpHeaders.host
it's that easy...

ActiveMQ Override scheduled message

I am trying to implement a delayed queue with overriding of messages using ActiveMQ.
Each message is scheduled to be delivered with a delay of x (say 60 seconds).
If the same message is received again in the meantime, it should override the previous message, so even if I receive 10 messages within x seconds, only one message should be processed.
Is there a clean way to accomplish this?
The question has two parts that need to be addressed separately:
Can a message be delayed in ActiveMQ?
Yes - see Delay and Schedule Message Delivery. You need to set <broker ... schedulerSupport="true"> in your ActiveMQ config, as well as setting the AMQ_SCHEDULED_DELAY property of the JMS message to say how long you want the message to be delayed (60000 ms for your 60-second case).
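Setting the delay from a JMS producer looks roughly like this (a sketch; the broker URL and queue name are assumptions):

import javax.jms.Connection;
import javax.jms.MessageProducer;
import javax.jms.Queue;
import javax.jms.Session;
import javax.jms.TextMessage;
import org.apache.activemq.ActiveMQConnectionFactory;
import org.apache.activemq.ScheduledMessage;

// Sketch of sending a message delayed by 60 seconds; the broker URL and
// queue name are assumptions, not from the original question.
public final class DelayedSender {
    public static void main(String[] args) throws Exception {
        ActiveMQConnectionFactory factory =
                new ActiveMQConnectionFactory("tcp://localhost:61616"); // assumed
        Connection connection = factory.createConnection();
        connection.start();
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        Queue queue = session.createQueue("delayed.queue"); // assumed
        MessageProducer producer = session.createProducer(queue);

        TextMessage message = session.createTextMessage("payload");
        // 60 s delay, matching the x = 60 seconds in the question
        message.setLongProperty(ScheduledMessage.AMQ_SCHEDULED_DELAY, 60000L);
        producer.send(message);

        connection.close();
    }
}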
Is there any way to prevent the same message being consumed more than once?
Yes, but that's an application concern rather than an ActiveMQ one. It's often referred to as de-duplication or idempotent consumption. The simplest way, if you only have one consumer, is to keep track of messages received in a map and check that map whenever you receive a message. If it has been seen, discard it.
For more complex use cases, where you have multiple consumers on different machines or you want the state to survive an application restart, you will need to keep a table of seen messages in a database and query it each time.
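For the single-consumer case, that map check is just a few lines; a sketch, where the "businessKey" property name is made up for illustration:

import java.util.Collections;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;
import javax.jms.JMSException;
import javax.jms.Message;
import javax.jms.MessageListener;

// Sketch of single-consumer, in-memory deduplication as described above.
// "businessKey" is a hypothetical property name; this state is lost on
// restart, so the multi-consumer case needs a database table instead.
public class DedupingListener implements MessageListener {

    private final Set<String> seen = Collections.newSetFromMap(new ConcurrentHashMap<>());

    @Override
    public void onMessage(Message message) {
        try {
            String key = message.getStringProperty("businessKey"); // hypothetical
            if (!seen.add(key)) {
                return; // duplicate of an already-processed message: discard
            }
            // process the message here
        } catch (JMSException e) {
            throw new RuntimeException(e);
        }
    }
}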
Please vote this answer up if it helps, as it encourages people to help you out.
Also, according to this method from the ActiveMQ BrokerService class, you must configure persistence to be able to use the scheduler functionality:
public boolean isSchedulerSupport() {
    return this.schedulerSupport && (isPersistent() || jobSchedulerStore != null);
}
You can configure the ActiveMQ broker to enable "schedulerSupport" with the following entry in your activemq.xml file, located in the conf directory of your ActiveMQ home directory:
<broker xmlns="http://activemq.apache.org/schema/core" brokerName="localhost" dataDirectory="${activemq.data}" schedulerSupport="true">
You can also override the BrokerService in your configuration:
@Configuration
@EnableJms
public class JMSConfiguration {

    @Bean
    public BrokerService brokerService() throws Exception {
        BrokerService brokerService = new BrokerService();
        // scheduler support also requires persistence (see isSchedulerSupport above)
        brokerService.setSchedulerSupport(true);
        return brokerService;
    }
}
