Stateful-Retry with DeadLetterPublishingRecoverer causing RetryCacheCapacityExceededException - spring-kafka

My container factory has a SeekToCurrentErrorHandler that uses a DeadLetterPublishingRecoverer to publish certain 'NotRetryableException'-type exceptions to a DLT, and to keep seeking the same offset indefinitely for other kinds of exceptions. With this setup, after a certain number of payloads that result in non-retryable exceptions, the map that stores the retry contexts - MapRetryContextCache (spring-retry) - overflows, throwing a RetryCacheCapacityExceededException. From the initial looks of it, retry contexts of messages handled by the DLT recoverer are not being removed from the MapRetryContextCache. Either that, or my configuration is incorrect.
SeekToCurrentErrorHandler eh = new SeekToCurrentErrorHandler(
        new DeadLetterPublishingRecoverer(kafkaTemplate), -1);
eh.addNotRetryableException(SomeNonRetryableException.class);
eh.setCommitRecovered(true);
ConcurrentKafkaListenerContainerFactory<String, String> factory = getContainerFactory();
factory.setErrorHandler(eh);
factory.setRetryTemplate(retryTemplate);
factory.setStatefulRetry(true);

In order to clear the cache, you must do the recovery in the retry template, not in the error handler.
@SpringBootApplication
public class So56846940Application {

    public static void main(String[] args) {
        SpringApplication.run(So56846940Application.class, args);
    }

    @Bean
    public NewTopic topic() {
        return TopicBuilder.name("so56846940").partitions(1).replicas(1).build();
    }

    @Bean
    public NewTopic topicDLT() {
        return TopicBuilder.name("so56846940.DLT").partitions(1).replicas(1).build();
    }

    @Bean
    public ApplicationRunner runner(KafkaTemplate<String, String> template,
            ConcurrentKafkaListenerContainerFactory<String, String> factory,
            DeadLetterPublishingRecoverer recoverer) {

        factory.setRetryTemplate(new RetryTemplate());
        factory.setStatefulRetry(true);
        factory.setRecoveryCallback(context -> {
            recoverer.accept((ConsumerRecord<?, ?>) context.getAttribute("record"),
                    (Exception) context.getLastThrowable());
            return null;
        });
        return args -> IntStream.range(0, 5000).forEach(i -> template.send("so56846940", "foo"));
    }

    @KafkaListener(id = "so56846940", topics = "so56846940")
    public void listen(String in) {
        System.out.println(in);
        throw new RuntimeException();
    }

    @Bean
    public DeadLetterPublishingRecoverer recoverer(KafkaTemplate<String, String> template) {
        return new DeadLetterPublishingRecoverer(template);
    }

    @Bean
    public SeekToCurrentErrorHandler eh() {
        return new SeekToCurrentErrorHandler(4);
    }

}
The error handler must retry at least as many times as the retry template so that the retries are exhausted and we clear the cache.
You should also configure the RetryTemplate with the same not retryable exceptions as the error handler.
We will clarify in the reference manual.
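For example, a RetryTemplate aligned with the error handler above might look like this (a minimal sketch; it assumes the question's SomeNonRetryableException and the factory variable from the configuration shown earlier):
Map<Class<? extends Throwable>, Boolean> classified = new HashMap<>();
classified.put(SomeNonRetryableException.class, false); // fail fast and go straight to recovery
// 4 attempts here, matching new SeekToCurrentErrorHandler(4); unlisted exceptions stay retryable
SimpleRetryPolicy policy = new SimpleRetryPolicy(4, classified, true, true);
RetryTemplate retryTemplate = new RetryTemplate();
retryTemplate.setRetryPolicy(policy);
factory.setRetryTemplate(retryTemplate);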

Related

How to test a kafka consumer against a real kafka broker running on a server?

I have difficulty understanding some Kafka concepts in Java Spring Boot. I'd like to test a consumer against a real Kafka broker running on a server, which has producers that write (or have already written) data to various topics. I would like to establish a connection with the server, consume the data, and verify or process its content in a test.
The overwhelming majority of examples (actually, all I have seen so far) on the internet refer to embedded Kafka (EmbeddedKafkaBroker) and show both a producer and a consumer implemented locally on one machine. I haven't found any example that explains how to connect to a remote Kafka server and read data from a particular topic.
I've written some code and I've printed the broker address with:
System.out.println(embeddedKafkaBroker.getBrokerAddress(0));
What I got is 127.0.0.1:9092, which means that it is local, so the connection with the remote server has not been established.
On the other hand, when I run the SpringBootApplication I get the payload from the remote broker.
Receiver:
@Component
public class Receiver {

    private static final String TOPIC_NAME = "X";
    private static final Logger LOGGER = LoggerFactory.getLogger(Receiver.class);

    private CountDownLatch latch = new CountDownLatch(1);

    public CountDownLatch getLatch() {
        return latch;
    }

    @KafkaListener(topics = TOPIC_NAME)
    public void receive(final byte[] payload) {
        LOGGER.info("received the following payload: '{}'", payload);
        latch.countDown();
    }

}
Config:
@EnableKafka
@Configuration
public class ByteReceiverConfig {

    @Autowired
    EmbeddedKafkaBroker kafkaEmbeded;

    @Value("${spring.kafka.bootstrap-servers}")
    private String bootstrapServers;

    @Value("${spring.kafka.consumer.group-id}")
    private String groupIdConfig;

    @Bean
    public KafkaListenerContainerFactory<?> kafkaListenerContainerFactory() {
        final ConcurrentKafkaListenerContainerFactory<Object, Object> factory =
                new ConcurrentKafkaListenerContainerFactory<>();
        factory.setConsumerFactory(consumerFactory());
        return factory;
    }

    @Bean
    ConsumerFactory<Object, Object> consumerFactory() {
        return new DefaultKafkaConsumerFactory<>(consumerProperties());
    }

    @Bean
    Map<String, Object> consumerProperties() {
        final Map<String, Object> properties =
                KafkaTestUtils.consumerProps("junit-test", "true", this.kafkaEmbeded);
        properties.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers);
        properties.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, ByteArrayDeserializer.class);
        properties.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, ByteArrayDeserializer.class);
        properties.put(ConsumerConfig.GROUP_ID_CONFIG, groupIdConfig);
        return properties;
    }

}
Test:
@EnableAutoConfiguration
@EnableKafka
@SpringBootTest(classes = {ByteReceiverConfig.class, Receiver.class})
@EmbeddedKafka
@ContextConfiguration(classes = ByteReceiverConfig.class)
@TestPropertySource(properties = { "spring.kafka.bootstrap-servers=${spring.embedded.kafka.brokers}",
        "spring.kafka.consumer.group-id=EmbeddedKafkaTest"})
public class KafkaTest {

    @Autowired
    private KafkaListenerEndpointRegistry kafkaListenerEndpointRegistry;

    @Autowired
    EmbeddedKafkaBroker embeddedKafkaBroker;

    @Autowired
    Receiver receiver;

    @BeforeEach
    void waitForAssignment() {
        for (MessageListenerContainer messageListenerContainer : kafkaListenerEndpointRegistry.getListenerContainers()) {
            System.out.println(messageListenerContainer.getAssignedPartitions().isEmpty());
            System.out.println(messageListenerContainer.toString());
            System.out.println(embeddedKafkaBroker.getTopics().size());
            System.out.println(embeddedKafkaBroker.getPartitionsPerTopic());
            System.out.println(embeddedKafkaBroker.getBrokerAddress(0));
            System.out.println(embeddedKafkaBroker.getBrokersAsString());
            ContainerTestUtils.waitForAssignment(messageListenerContainer,
                    embeddedKafkaBroker.getPartitionsPerTopic());
        }
    }

    @Test
    public void testReceive() {
    }

}
I would like somebody to shed some light on the following issues:
1. Can an instance of the class EmbeddedKafkaBroker be used to test data that comes from a remote broker, or is it only for local tests, in which I would produce (i.e. send) data to a topic that I created and consume it myself?
2. Is it possible to write a test class for a real Kafka server? For instance, to verify whether a connection has been established, or whether data has been read from a specific topic. What annotations, configurations, and classes would be needed in such a case?
3. If I only want to consume data, do I have to provide the producer configuration in a config file (it would be strange, but all the examples I have encountered so far did)?
4. Do you know any resources (books, websites, etc.) that show real examples of using Kafka, i.e. with a remote Kafka server, with a producer or a consumer only?
You don't need an embedded broker at all if you want to talk to an external broker only.
Yes, just set the bootstrap servers property appropriately.
No, you don't need producer configuration.
EDIT
@SpringBootApplication
public class So56044105Application {

    public static void main(String[] args) {
        SpringApplication.run(So56044105Application.class, args);
    }

    @Bean
    public NewTopic topic() {
        return new NewTopic("so56044105", 1, (short) 1);
    }

}
spring.kafka.bootstrap-servers=10.0.0.8:9092
spring.kafka.consumer.enable-auto-commit=false
@RunWith(SpringRunner.class)
@SpringBootTest(classes = { So56044105Application.class, So56044105ApplicationTests.Config.class })
public class So56044105ApplicationTests {

    @Autowired
    public Config config;

    @Test
    public void test() throws InterruptedException {
        assertThat(config.latch.await(10, TimeUnit.SECONDS)).isTrue();
        assertThat(config.received.get(0)).isEqualTo("foo");
    }

    @Configuration
    public static class Config implements ConsumerSeekAware {

        List<String> received = new ArrayList<>();

        CountDownLatch latch = new CountDownLatch(3);

        @KafkaListener(id = "so56044105", topics = "so56044105")
        public void listen(String in) {
            System.out.println(in);
            this.received.add(in);
            this.latch.countDown();
        }

        @Override
        public void registerSeekCallback(ConsumerSeekCallback callback) {
        }

        @Override
        public void onPartitionsAssigned(Map<TopicPartition, Long> assignments, ConsumerSeekCallback callback) {
            System.out.println("Seeking to beginning");
            assignments.keySet().forEach(tp -> callback.seekToBeginning(tp.topic(), tp.partition()));
        }

        @Override
        public void onIdleContainer(Map<TopicPartition, Long> assignments, ConsumerSeekCallback callback) {
        }

    }

}
There are some examples in this repository for bootstrapping real Kafka producers and consumers across a variety of configurations — plaintext, SSL, with and without authentication, etc.
Note: the repo above contains examples for the Effective Kafka book, which I am the author of. However, they can be used freely without the book and hopefully they make just as much sense on their own.
More to the point, here are a pair of examples for a basic producer and a consumer.
/** A sample Kafka producer. */
import static java.lang.System.*;

import java.util.*;

import org.apache.kafka.clients.producer.*;
import org.apache.kafka.common.serialization.*;

public final class BasicProducerSample {
    public static void main(String[] args) throws InterruptedException {
        final var topic = "getting-started";

        final Map<String, Object> config =
                Map.of(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092",
                        ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName(),
                        ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName(),
                        ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, true);

        try (var producer = new KafkaProducer<String, String>(config)) {
            while (true) {
                final var key = "myKey";
                final var value = new Date().toString();
                out.format("Publishing record with value %s%n", value);

                final Callback callback = (metadata, exception) ->
                        out.format("Published with metadata: %s, error: %s%n", metadata, exception);

                // publish the record, handling the metadata in the callback
                producer.send(new ProducerRecord<>(topic, key, value), callback);

                // wait a second before publishing another
                Thread.sleep(1000);
            }
        }
    }
}
/** A sample Kafka consumer. */
import static java.lang.System.*;

import java.time.*;
import java.util.*;

import org.apache.kafka.clients.consumer.*;
import org.apache.kafka.common.serialization.*;

public final class BasicConsumerSample {
    public static void main(String[] args) {
        final var topic = "getting-started";

        final Map<String, Object> config =
                Map.of(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092",
                        ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName(),
                        ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName(),
                        ConsumerConfig.GROUP_ID_CONFIG, "basic-consumer-sample",
                        ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest",
                        ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, false);

        try (var consumer = new KafkaConsumer<String, String>(config)) {
            consumer.subscribe(Set.of(topic));

            while (true) {
                final var records = consumer.poll(Duration.ofMillis(100));
                for (var record : records) {
                    out.format("Got record with value %s%n", record.value());
                }
                consumer.commitAsync();
            }
        }
    }
}
Now, these are obviously not unit tests. But with very little rework they could be turned into tests. The next step would be to remove Thread.sleep() and add assertions. Note that, since Kafka is inherently asynchronous, naively asserting a published message in a consumer immediately after publishing will fail. For a robust, repeatable test, you may want to use something like Timesert.
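For example, a hand-rolled poll-and-assert helper (a sketch standing in for a library like Timesert; the await method name is mine):
// retry the assertion until it passes or the deadline lapses, absorbing
// Kafka's end-to-end publish/consume latency
static void await(long timeoutMillis, Runnable assertion) throws InterruptedException {
    final long deadline = System.currentTimeMillis() + timeoutMillis;
    while (true) {
        try {
            assertion.run(); // passed; we are done
            return;
        }
        catch (AssertionError e) {
            if (System.currentTimeMillis() > deadline) {
                throw e; // timed out; surface the last failure
            }
            Thread.sleep(100); // back off briefly and re-poll
        }
    }
}
A test would then publish a record and call something like await(10_000, () -> assertEquals(1, received.size())) instead of sleeping.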

Message Retry policies in Spring AMQP

I am using spring-amqp to consume messages from RabbitMQ in my web application.
The web application consists of multiple components, such as Redis and an Oracle DB.
Now I have a scenario: if any exception occurs due to infrastructure (the Oracle server is down, a Redis connection issue), I want to push the message back to the same queue and consume it again after a specified delay.
If after that delay the message still leads to the same exception, I probably want to use a maximum-attempts option, or push the message back to the queue as above and send a mail to the administrator stating "Infrastructure Issue".
Does Spring AMQP support the above scenario? If yes, please show me how to arrive at such or similar solutions.
I tried the piece of code below. The message is not going to the dead letter queue; instead it is re-queued to the same queue, causing an infinite loop. Please correct me where I am going wrong.
Configuration class
@Configuration
public class MQConfig {

    public static final String OUTGOING_QUEUE = "my.outgoing.example";
    public static final String INCOMING_QUEUE = "my.incoming.example";
    public static final String DEAD_LETTER_QUEUE = "my.deadletter.queue.example";

    @Autowired
    private ConnectionFactory cachingConnectionFactory;

    // Setting the annotation listeners to use the jackson2JsonMessageConverter
    @Bean
    public SimpleRabbitListenerContainerFactory rabbitListenerContainerFactory() {
        SimpleRabbitListenerContainerFactory factory = new SimpleRabbitListenerContainerFactory();
        factory.setConnectionFactory(cachingConnectionFactory);
        factory.setMessageConverter(jackson2JsonMessageConverter());
        factory.setDefaultRequeueRejected(false);
        return factory;
    }

    // Standardize on a single objectMapper for all message queue items
    @Bean
    public Jackson2JsonMessageConverter jackson2JsonMessageConverter() {
        return new Jackson2JsonMessageConverter();
    }

    @Bean
    public Queue outgoingQueue() {
        Map<String, Object> args = new HashMap<String, Object>();
        args.put("x-dead-letter-exchange", "dlx");
        args.put("x-dead-letter-routing-key", DEAD_LETTER_QUEUE);
        args.put("x-message-ttl", 50000);
        return new Queue(OUTGOING_QUEUE, false, false, false, args);
    }

    @Bean
    public RabbitTemplate outgoingSender() {
        RabbitTemplate rabbitTemplate = new RabbitTemplate(cachingConnectionFactory);
        rabbitTemplate.setQueue(outgoingQueue().getName());
        // rabbitTemplate.setRoutingKey(outgoingQueue().getName());
        rabbitTemplate.setMessageConverter(jackson2JsonMessageConverter());
        return rabbitTemplate;
    }

    @Bean
    public Queue incomingQueue() {
        return new Queue(INCOMING_QUEUE);
    }

    @Bean
    public Queue deadLetterQueue() {
        return new Queue(DEAD_LETTER_QUEUE);
    }

    @Bean
    public DirectExchange dlx() {
        return new DirectExchange(DEAD_LETTER_QUEUE);
    }

    @Bean
    public Binding dlqBinding() {
        return BindingBuilder.bind(deadLetterQueue()).to(dlx()).with(DEAD_LETTER_QUEUE);
    }

}
Core logic
@Component
public class DeadLetterSendReceive {

    private static final Logger LOGGER = LoggerFactory.getLogger(DeadLetterSendReceive.class);

    @Autowired
    private RabbitTemplate outgoingSender;

    // Scheduled task to send an object every 5 seconds
    @Scheduled(fixedDelay = 5000)
    public void sender() {
        Integer int1[] = new Integer[] { 10, 20, 30, 40, 50 };
        for (int i = 0; i < int1.length; i++) {
            System.out.println(int1[i]);
            if (int1[i] / 10 == 1) {
                throw new AmqpRejectAndDontRequeueException("to deadletter queue");
            }
            else {
                ExampleObject ex = new ExampleObject();
                ex.setValue(int1[i]);
                LOGGER.info("Sending example object at " + ex.getValue());
                outgoingSender.convertAndSend(ex);
            }
        }
    }

    // Annotation to listen for an ExampleObject
    @RabbitListener(queues = MQConfig.INCOMING_QUEUE)
    public void handleMessage(ExampleObject exampleObject) {
        LOGGER.info("Received incoming object at " + exampleObject.getValue());
    }

}
Pojo Class
import java.util.Date;
public class ExampleObject {
private Date date = new Date();
private int value;
public int getValue() {
return value;
}
public void setValue(int value) {
this.value = value;
}
public ExampleObject() {
}
#Override
public String toString() {
return "ExampleObject{" +
"date= " + date +
'}';
}
public Date getDate() {
return date;
}
public void setDate(Date date) {
this.date = date;
}
}
There are a couple of ways to do it. One is to use the delayed message exchange plugin and publish the failed message to it; you can set a header to track how many attempts have been made.
Or you can do it with a dead letter queue with a TTL where the dead-letter queue is configured with dead-lettering to send the expired message back to the original queue. See my answer to this question and its link to another answer.
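As a sketch of that topology (the queue names and the 30-second delay are my assumptions, not from the question), the work queue dead-letters failures to a retry queue, whose TTL dead-letters expired messages back again:
@Bean
public Queue workQueue() {
    // rejected deliveries are dead-lettered to the retry queue
    return QueueBuilder.durable("work")
            .withArgument("x-dead-letter-exchange", "") // default exchange
            .withArgument("x-dead-letter-routing-key", "work.retry")
            .build();
}

@Bean
public Queue retryQueue() {
    // no consumers here; messages wait out the TTL, then are
    // dead-lettered back to the work queue for the next attempt
    return QueueBuilder.durable("work.retry")
            .withArgument("x-message-ttl", 30000) // the retry delay in ms
            .withArgument("x-dead-letter-exchange", "")
            .withArgument("x-dead-letter-routing-key", "work")
            .build();
}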
You can use the x-death header to track retries; recent broker versions keep a count in it instead of adding a new entry on every cycle.
To force the message to go to the DLQ, set defaultRequeueRejected to false or throw an AmqpRejectAndDontRequeueException.
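For example, a hedged sketch of capping retries in the listener (it assumes a broker new enough to keep a consolidated count in x-death, and a hypothetical process() method for the business logic):
@RabbitListener(queues = MQConfig.INCOMING_QUEUE)
public void handleMessage(ExampleObject exampleObject,
        @Header(name = "x-death", required = false) List<Map<String, ?>> xDeath) {
    long attempts = (xDeath == null || xDeath.isEmpty()) ? 0 : (Long) xDeath.get(0).get("count");
    if (attempts >= 3) {
        // retries exhausted: alert the administrator and stop re-queuing
        LOGGER.error("Giving up on " + exampleObject + " after " + attempts + " attempts");
        return;
    }
    process(exampleObject); // throwing here sends the message back through the DLQ/TTL cycle
}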

exception handling for rabbitmq listener in spring

I am working with Spring and I am new to RabbitMQ; I want to know where I am going wrong.
I have written a RabbitMQ connection factory and a listener container containing a listener. I have also provided the listener container with an error handler, but it doesn't seem to work.
My Spring beans:
<rabbit:connection-factory id="RabbitMQConnectionFactory" virtual-host="${rabbitmq.vhost}"
    host="${rabbitmq.host}" port="${rabbitmq.port}"
    username="${rabbitmq.username}" password="${rabbitmq.password}"/>
<rabbit:listener-container missing-queues-fatal="false" declaration-retries="0"
    error-handler="errorHandlinginRabbitMQ" recovery-interval="10000"
    auto-startup="${rabbitmq.apc.autostartup}" max-concurrency="1" prefetch="1"
    concurrency="1" connection-factory="RabbitMQConnectionFactory" acknowledge="manual">
    <rabbit:listener ref="apcRabbitMQListener" queue-names="${queue.tpg.rabbitmq.destination.apc}" exclusive="true" />
</rabbit:listener-container>
<bean id="errorHandlinginRabbitMQ" class="RabbitMQErrorHandler"/>
This is my RabbitMQErrorHandler class:
public class RabbitMQErrorHandler implements ErrorHandler {

    @Override
    public void handleError(final Throwable exception) {
        System.out.println("error occurred in message listener and handled in error handler" + exception.toString());
    }

}
What I assume is: if I provide invalid credentials to the connection factory, the handleError method of the RabbitMQErrorHandler class should execute and the server should still start properly. However, when I try to run the server, the method does not execute (the exception is thrown in the console) and the server is not able to start. Where am I missing something, and what might that be?
The error handler is for handling errors during message delivery; since you haven't connected yet, there is no message for which to handle an error.
To get connection exceptions, you should implement ApplicationListener<ListenerContainerConsumerFailedEvent> and you will receive the failure as an event if you add it as a bean to the application context.
You will get other events (consumer started, consumer stopped etc) if you implement ApplicationListener<AmqpEvent>.
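For example, a minimal sketch of such a bean (the class name is mine):
@Component
public class ConsumerFailureLogger implements ApplicationListener<ListenerContainerConsumerFailedEvent> {

    @Override
    public void onApplicationEvent(ListenerContainerConsumerFailedEvent event) {
        // fired when a consumer fails to start or dies, e.g. on bad credentials
        System.out.println("Consumer failed: " + event.getReason() + "; fatal: " + event.isFatal());
    }

}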
EDIT
<rabbit:listener-container auto-startup="false">
<rabbit:listener id="fooContainer" ref="foo" method="handleMessage"
queue-names="si.test.queue" />
</rabbit:listener-container>
<bean id="foo" class="com.example.Foo" />
Foo:
public class Foo {

    public final CountDownLatch latch = new CountDownLatch(1);

    public void handleMessage(String foo) {
        System.out.println(foo);
        this.latch.countDown();
    }

}
App:
@SpringBootApplication
@ImportResource("context.xml")
public class So43208940Application implements CommandLineRunner {

    public static void main(String[] args) {
        ConfigurableApplicationContext context = SpringApplication.run(So43208940Application.class, args);
        context.close();
    }

    @Autowired
    private SimpleMessageListenerContainer fooContainer;

    @Autowired
    private CachingConnectionFactory connectionFactory;

    @Autowired
    private RabbitTemplate template;

    @Autowired
    private Foo foo;

    @Override
    public void run(String... args) throws Exception {
        this.connectionFactory.setUsername("junk");
        try {
            this.fooContainer.start();
        }
        catch (Exception e) {
            e.printStackTrace();
        }
        Thread.sleep(5000);
        this.connectionFactory.setUsername("guest");
        this.fooContainer.start();
        System.out.println("Container started");
        this.template.convertAndSend("si.test.queue", "foo");
        foo.latch.await(10, TimeUnit.SECONDS);
    }

}

Rest service with oauth2: Failed to find access token for token

I am trying to create a Spring REST service that is authenticated by my own OAuth2 server. I created the resource server:
@Configuration
@EnableResourceServer
protected static class ResourceServer extends ResourceServerConfigurerAdapter {

    @Autowired
    private TokenStore tokenStore;

    @Override
    public void configure(ResourceServerSecurityConfigurer resources) throws Exception {
        resources.tokenStore(tokenStore).resourceId("mobileapp");
    }

    @Override
    public void configure(HttpSecurity http) throws Exception {
        http.authorizeRequests().antMatchers("/api/shop/**").authenticated().and()
            .authorizeRequests().antMatchers("/auth/**").anonymous();
    }

}
and authorization server:
@Configuration
@EnableAuthorizationServer
protected static class OAuth2Config extends AuthorizationServerConfigurerAdapter {

    @Autowired
    private AuthenticationManager auth;

    @Autowired
    private DataSource dataSource;

    @Autowired
    private BCryptPasswordEncoder passwordEncoder;

    @Bean
    public JdbcTokenStore tokenStore() {
        return new JdbcTokenStore(dataSource);
    }

    @Bean
    protected AuthorizationCodeServices authorizationCodeServices() {
        return new JdbcAuthorizationCodeServices(dataSource);
    }

    @Override
    public void configure(AuthorizationServerSecurityConfigurer security) throws Exception {
        security.passwordEncoder(passwordEncoder);
    }

    @Override
    public void configure(AuthorizationServerEndpointsConfigurer endpoints) throws Exception {
        endpoints
                .authorizationCodeServices(authorizationCodeServices())
                .authenticationManager(auth)
                .tokenStore(tokenStore())
                .approvalStoreDisabled();
    }

    @Override
    public void configure(ClientDetailsServiceConfigurer clients) throws Exception {
        clients.jdbc(dataSource)
                .passwordEncoder(passwordEncoder)
                .withClient("mobile")
                .authorizedGrantTypes("password", "refresh_token")
                .authorities("ROLE_CLIENT")
                .scopes("read", "write", "trust")
                .autoApprove(true)
                .resourceIds("mobileapp")
                .secret("123456");
    }

}
When I try to obtain an access token from the server, using curl:
curl -X POST -vu mobile:123456 http://localhost:8080/oauth/token \
  -H "Accept: application/json" \
  -d "password=test123&username=admin@gmail.com&grant_type=password&scope=read&client_secret=123456&client_id=mobile"
I get this error as a response message:
{"error":"server_error","error_description":"java.io.NotSerializableException:
org.springframework.security.crypto.bcrypt.BCryptPasswordEncoder"}
In the Tomcat logs there is also:
o.s.s.o.p.token.store.JdbcTokenStore - Failed to find access token for token
EDIT:
Bean definition of the password encoder:
@Bean
public BCryptPasswordEncoder passwordEncoder() {
    BCryptPasswordEncoder bCryptPasswordEncoder = new BCryptPasswordEncoder();
    return bCryptPasswordEncoder;
}
This bean is created in the class in which OAuth2Config and ResourceServer are declared.
I checked the code and found out which table Spring uses, and that table is empty. My question is: should it be auto-generated, or is there a problem with my code?
Thanks in advance for help.
Override the JdbcTokenStore class so that readAccessToken cleans up tokens that can no longer be deserialized. A minimal subclass might look like this (the class name CleaningJdbcTokenStore is mine):
public class CleaningJdbcTokenStore extends JdbcTokenStore {

    private static final Logger LOG = LoggerFactory.getLogger(CleaningJdbcTokenStore.class);

    public CleaningJdbcTokenStore(DataSource dataSource) {
        super(dataSource);
    }

    @Override
    public OAuth2AccessToken readAccessToken(String tokenValue) {
        OAuth2AccessToken accessToken = null;
        try {
            accessToken = super.readAccessToken(tokenValue);
        }
        catch (EmptyResultDataAccessException e) {
            if (LOG.isInfoEnabled()) {
                LOG.info("Failed to find access token for token " + tokenValue);
            }
        }
        catch (IllegalArgumentException e) {
            // the row exists but its serialized authentication cannot be
            // deserialized; log it and remove the stale token
            LOG.warn("Failed to deserialize access token for " + tokenValue, e);
            removeAccessToken(tokenValue);
        }
        return accessToken;
    }

}
This resolves the "failed to find access token" problem. Use this class in your OAuth2Config.
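Then wire it in place of the plain store (a sketch, using the subclass name from above):
@Bean
public JdbcTokenStore tokenStore() {
    // the cleaning subclass instead of the stock JdbcTokenStore
    return new CleaningJdbcTokenStore(dataSource);
}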
Your model must contain a BCryptPasswordEncoder, which is not serializable. Make it transient in your user model:
private transient BCryptPasswordEncoder passwordEncoder;
The solution, as far as Postgres is concerned, is to use BYTEA for ALL token and authentication columns.
The columns are defined as LONGVARBINARY in this schema reference: https://github.com/spring-projects/spring-security-oauth/blob/master/spring-security-oauth2/src/test/resources/schema.sql
In other words, replace LONGVARBINARY with BYTEA if you are using Postgres.
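For example, the access-token table from that reference schema, translated for Postgres (a sketch; derive the remaining tables the same way):
CREATE TABLE oauth_access_token (
  token_id VARCHAR(256),
  token BYTEA,                            -- was LONGVARBINARY
  authentication_id VARCHAR(256) PRIMARY KEY,
  user_name VARCHAR(256),
  client_id VARCHAR(256),
  authentication BYTEA,                   -- was LONGVARBINARY
  refresh_token VARCHAR(256)
);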
Cheers

Access Session Listener Managed by Container

AFAIK, the HttpSessionListener implementation class is instantiated when the first session is created.
Therefore, I would like to access this instance, because I need to count how many sessions are active, display that somewhere, and check which users are currently logged in. In the code below there is a list instance variable; I need to access this listener instance in order to read that private variable.
@WebListener()
public class SessionListener implements HttpSessionListener, HttpSessionAttributeListener {

    private List<HttpSession> sessionList;

    public SessionListener() {
        sessionList = new ArrayList<HttpSession>();
    }

    @Override
    public void sessionCreated(HttpSessionEvent se) {
        sessionList.add(se.getSession());
    }

    @Override
    public void sessionDestroyed(HttpSessionEvent se) {
        sessionList.remove(se.getSession());
    }

    @Override
    public void attributeAdded(HttpSessionBindingEvent event) {
    }

    @Override
    public void attributeRemoved(HttpSessionBindingEvent event) {
    }

    @Override
    public void attributeReplaced(HttpSessionBindingEvent event) {
    }

    /**
     * @return the sessionList
     */
    public List<HttpSession> getSessionList() {
        return Collections.unmodifiableList(sessionList);
    }

}
Please help.
Thanks.
I have to make a few assumptions as you don't say how your authentication method works.
I will assume that your username will be contained in your HttpServletRequest (this is very common). Unless you have specifically coded your session to contain the username, it will not contain the username of the authenticated user - the username is usually confined to the HttpServletRequest. Therefore you will not usually achieve your goal with an HttpSessionListener. You probably know this, but there are various "scopes":
application scope (ServletContext) - per application
session scope (HttpSession) - per session
request scope (HttpServletRequest) - per request
As I said, the username is usually stored in the request scope. You can access the session and application scopes from the request scope. You cannot access the request scope from the session scope (as this doesn't make sense!).
To solve your problem I would create a Map stored in the application scope and use a servlet Filter to populate it. You might want to use a time-based cache (tied to the session time-out value) rather than a plain map, because most sessions end by timing out rather than being explicitly terminated by the user. kitty-cache is a really simple time-based cache that you could use for this purpose.
Anyway a code sketch (untested) might look something like this:
public class AuthSessionCounter implements Filter {

    private static final String AUTHSESSIONS = "authsessions";

    private static ServletContext sc;

    public void init(FilterConfig filterConfig) throws ServletException {
        sc = filterConfig.getServletContext();
        HashMap<String, String> authsessions = new HashMap<String, String>();
        sc.setAttribute(AUTHSESSIONS, authsessions);
    }

    public void doFilter(ServletRequest request, ServletResponse response, FilterChain chain)
            throws IOException, ServletException {
        HttpServletRequest hsr = (HttpServletRequest) request;
        if (hsr.getRemoteUser() != null) {
            HttpSession session = hsr.getSession();
            HashMap<String, String> authsessions =
                    (HashMap<String, String>) sc.getAttribute(AUTHSESSIONS);
            if (!authsessions.containsKey(session.getId())) {
                authsessions.put(session.getId(), hsr.getRemoteUser());
                sc.setAttribute(AUTHSESSIONS, authsessions);
            }
        }
        chain.doFilter(request, response);
    }

    public void destroy() {
    }

}
You should now be able to obtain details of who and how many users are logged in from the authsessions Map that is stored in the application scope.
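Any servlet can then read the map back out of the application scope, for example:
// a sketch: inside any HttpServlet method
@SuppressWarnings("unchecked")
HashMap<String, String> authsessions =
        (HashMap<String, String>) getServletContext().getAttribute("authsessions");
int loggedInUsers = authsessions.size();              // how many are logged in
Collection<String> usernames = authsessions.values(); // who they are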
I hope this helps,
Mark
UPDATE
My authentication works by checking the username and password in a servlet and creating a new session for it.
In which case an HttpSessionListener might work for you - although, as I mentioned before, you probably still need a time-based cache, because most user sessions time out rather than being explicitly terminated. My untested code sketch would now look something like this:
public class SessionCounter
        implements HttpSessionListener, HttpSessionAttributeListener {

    private static final String AUTHSESSIONS = "authsessions";
    private static final String USERNAME = "username";

    private static ServletContext sc;

    public void sessionCreated(HttpSessionEvent se) {
        if (sc == null) {
            sc = se.getSession().getServletContext();
            HashMap<String, String> authsessions = new HashMap<String, String>();
            sc.setAttribute(AUTHSESSIONS, authsessions);
        }
    }

    public void sessionDestroyed(HttpSessionEvent se) {
        HttpSession session = se.getSession();
        HashMap<String, String> authsessions =
                (HashMap<String, String>) sc.getAttribute(AUTHSESSIONS);
        authsessions.remove(session.getId());
        sc.setAttribute(AUTHSESSIONS, authsessions);
    }

    public void attributeAdded(HttpSessionBindingEvent se) {
        if (USERNAME.equals(se.getName())) {
            HttpSession session = se.getSession();
            HashMap<String, String> authsessions =
                    (HashMap<String, String>) sc.getAttribute(AUTHSESSIONS);
            authsessions.put(session.getId(), (String) se.getValue());
            sc.setAttribute(AUTHSESSIONS, authsessions);
        }
    }

    public void attributeRemoved(HttpSessionBindingEvent se) {}

    public void attributeReplaced(HttpSessionBindingEvent se) {}

}
