Declaring a Map<String, Object> as a scoped bean causes problems - spring-mvc

When I declare a Map<String, Object> session-scoped bean in my Spring MVC project as below:

@Bean
@SessionScope
public Map<String, Object> allProjects() {
    return new TreeMap<>();
}

It is weird that it contains many unexpected entries even though I didn't put anything into it, as if it were the whole session scope. This does not happen if I declare it as Map<String, String>. Is there any formal statement in the documentation about this?
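For context, one likely explanation (hedged: based on the Spring Framework reference documentation on autowiring collections): when an injection target is a Map<String, T>, Spring does not inject your map bean at all; it injects a freshly built map of all beans assignable to T, keyed by bean name. With T = Object that is every bean in the context, which is why the map looks like "the whole session scope", and why Map<String, String> behaves normally. Injecting by bean name side-steps the collection rule; the ProjectController class below is made up for illustration:

```java
import java.util.Map;
import javax.annotation.Resource;
import org.springframework.stereotype.Controller;

@Controller
public class ProjectController {

    // @Resource injects by name, so this receives the allProjects bean itself
    // rather than a Map<String, Object> of every bean in the context.
    @Resource(name = "allProjects")
    private Map<String, Object> allProjects;
}
```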

Related

Can you programmatically switch between existing Session Contexts

I have multiple existing Stateful Session Beans. I want to use a new library/framework that instantiates objects itself (outside the container manager). This framework does not conceptually have 'sessions'. I'm trying to develop a proof of concept that routes calls to beans in different session contexts, based on a map of session identifiers to session contexts.
Below are two handlers that are instantiated by the library. The first handler creates new BoundSessionContexts and maps them to generated keys. The second handler is supposed to be passed the key and, based on it, propagate to the corresponding session context, look up the beans in that context, and call those beans.
public class Handler1 implements Handler1Interface {
    //**1**
    WeldManager weldManager = (WeldManager) CDI.current().getBeanManager();
    BoundSessionContext boundSessionCtx;
    //**2**
    Map<String, Object> sessionMap = new HashMap<>();

    public Handler1() {
    }

    //**3**
    @Override
    public long start() {
        long sessionId = new Random().nextLong();
        //**4**
        boundSessionCtx = weldManager.instance().select(BoundSessionContext.class, BoundLiteral.INSTANCE).get();
        //**5**
        //boundSessionCtx.associate(sessionMap);
        // Make certain the sessionId isn't already in use.
        while (SessionMapper.get(sessionId) != null) {
            sessionId = new Random().nextLong();
        }
        //**6**
        SessionMapper.put(sessionId, boundSessionCtx);
        return sessionId;
    }

    //**7**
    @Override
    public void stop(long sessionId) {
        SessionMapper.remove(sessionId);
    }
}
public class Handler2 implements Handler1Interface {
    //**8**
    @Inject
    EJB1 ejb1;
    //**9**
    @Inject
    EJB2 ejb2;

    BeanManager beanManager;
    BoundSessionContext boundSessionCxt;
    //**10**
    Map<String, Object> sessionMap = new HashMap<>();

    public Handler2() {
    }

    @Override
    public Object process(long sessionId, Object input) {
        lookupEJBs(sessionId);
        //**11**
        ejb1.method();
        Object result = ejb2.method();
        return result;
    }

    //**12**
    private void lookupEJBs(long sessionId) {
        boundSessionCxt = SessionMapper.get(sessionId);
        boundSessionCxt.associate(sessionMap);
        boundSessionCxt.activate();
        beanManager = CDI.current().getBeanManager();
        //**13**
        TypeLiteral<EJB1> typeLiteral = new TypeLiteral<EJB1>() {};
        Set<Bean<?>> beans = beanManager.getBeans(typeLiteral.getType());
        Bean<EJB1> bean = (Bean<EJB1>) beanManager.resolve(beans);
        ejb1 = bean.create(beanManager.createCreationalContext(bean));
        //**14**
        TypeLiteral<EJB2> typeLiteral2 = new TypeLiteral<EJB2>() {};
        Set<Bean<?>> beans2 = beanManager.getBeans(typeLiteral2.getType());
        Bean<EJB2> bean2 = (Bean<EJB2>) beanManager.resolve(beans2);
        ejb2 = bean2.create(beanManager.createCreationalContext(bean2));
    }
}
I've never been able to successfully call ejb2.method(). While I have used EJBs for many years, this is my first attempt at manipulating contexts, and I am definitely feeling lost. It's one thing to use the magic; it's another to warp that magic to your whim.
Questions (in no particular order):
A) Is what I'm trying to do 'reasonably' achievable? Or is this a pipe dream? NOTE: I am currently using WELD-API 3.0.SP4, but I am not married to it.
B) I have never truly understood the reason for the map (10) that is associated with a context (12).
B1) What is its purpose?
B2) How does its use affect what I'm trying to do here?
B3) Am I correct that I would want to associate and activate the context inside the object where I want to use the context's beans?
C) Am I correct that @Inject (8 and 9) is pointless, as the Handler2 object is not instantiated/injected by the bean manager?
Many thanks to anyone who can help me understand EJB/CDI session contexts better. :)
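For B1/B3, a hedged sketch of the usual bound-context lifecycle may help (based on the Weld BoundSessionContext API; SessionMapper, EJB1, beanManager and the per-session map come from the question, and a getReference() lookup stands in for the Bean.create() call):

```java
// The map associated with the context is its backing store: the context reads
// and writes @SessionScoped bean instances through it, playing the role an
// HttpSession would normally play. Each logical session therefore needs its
// own map, kept alongside the context (e.g. in SessionMapper), not one shared
// field on the handler.
Map<String, Object> sessionStore = new HashMap<>(); // one per sessionId
BoundSessionContext ctx = SessionMapper.get(sessionId);

ctx.associate(sessionStore); // bind this session's backing map
ctx.activate();              // session-scoped beans are now resolvable on this thread
try {
    // Resolve a contextual reference instead of calling Bean.create() directly,
    // so the container hands back the instance stored in the active context:
    Set<Bean<?>> beans = beanManager.getBeans(EJB1.class);
    Bean<?> bean = beanManager.resolve(beans);
    EJB1 ejb1 = (EJB1) beanManager.getReference(bean, EJB1.class,
            beanManager.createCreationalContext(bean));
    ejb1.method();
} finally {
    ctx.deactivate();             // always deactivate ...
    ctx.dissociate(sessionStore); // ... and unbind the map afterwards
}
```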

KafkaListenerContainerFactory not getting created properly

I have two listener container factories, one for the main topic and another for the retry topic, as given below:
@Bean
public ConcurrentKafkaListenerContainerFactory<String, Object> primaryKafkaListenerContainerFactory() {
    ConcurrentKafkaListenerContainerFactory<String, Object> factory = new ConcurrentKafkaListenerContainerFactory<>();
    factory.setConsumerFactory(primaryConsumerFactory());
    factory.setConcurrency(3);
    factory.setAutoStartup(false);
    factory.getContainerProperties().setAckOnError(false);
    factory.getContainerProperties().setAckMode(AckMode.RECORD);
    errorHandler.setAckAfterHandle(true);
    factory.setErrorHandler(errorHandler);
    return factory;
}

@Bean
public ConsumerFactory<String, Object> primaryConsumerFactory() {
    Map<String, Object> map = new HashMap<>();
    Properties consumerProperties = getConsumerProperties();
    consumerProperties.put(ConsumerConfig.GROUP_ID_CONFIG, "groupid");
    consumerProperties.forEach((key, value) -> map.put((String) key, value));
    ErrorHandlingDeserializer2<Object> errorHandlingDeserializer = new ErrorHandlingDeserializer2<>(
            getSoapMessageConverter());
    DefaultKafkaConsumerFactory<String, Object> consumerFactory = new DefaultKafkaConsumerFactory<>(map);
    consumerFactory.setValueDeserializer(errorHandlingDeserializer);
    return consumerFactory;
}

@Bean
public ConcurrentKafkaListenerContainerFactory<String, Object> kafkaRetryListenerContainerFactory() {
    ConcurrentKafkaListenerContainerFactory<String, Object> factory = new ConcurrentKafkaListenerContainerFactory<>();
    factory.setConsumerFactory(retryConsumerFactory());
    factory.setConcurrency(3);
    factory.setAutoStartup(false);
    factory.getContainerProperties().setAckOnError(false);
    factory.getContainerProperties().setAckMode(AckMode.MANUAL_IMMEDIATE);
    factory.setErrorHandler(new SeekToCurrentErrorHandler(
            new MyDeadLetterPublishingRecoverer("mytopic",
                    deadLetterKafkaTemplate()),
            new FixedBackOff(5000, 2)));
    return factory;
}

@Bean
public ConsumerFactory<String, Object> retryConsumerFactory() {
    Map<String, Object> map = new HashMap<>();
    Properties consumerProperties = getConsumerProperties();
    consumerProperties.put(ConsumerConfig.GROUP_ID_CONFIG, "retry.id");
    consumerProperties.put("max.poll.interval.ms", "60000");
    consumerProperties.forEach((key, value) -> map.put((String) key, value));
    DefaultKafkaConsumerFactory<String, Object> retryConsumerFactory = new DefaultKafkaConsumerFactory<>(map);
    retryConsumerFactory.setValueDeserializer(getCustomMessageConverter());
    return retryConsumerFactory;
}
I have two separate listener classes, each of which uses one of the aforementioned container factories.
There are two issues here:
Spring complains: Error creating bean with name 'kafkaListenerContainerFactory' defined Caused by: org.springframework.beans.factory.NoSuchBeanDefinitionException: No qualifying bean of type 'org.springframework.kafka.core.ConsumerFactory' available: expected at least 1 bean which qualifies as autowire candidate.
To fix this I have to rename primaryKafkaListenerContainerFactory to kafkaListenerContainerFactory. Why is this so?
The second issue is that kafkaRetryListenerContainerFactory does not seem to pick up any of the properties I set in retryConsumerFactory (especially "max.poll.interval.ms"); instead it uses the properties set on primaryConsumerFactory in kafkaListenerContainerFactory.

"To fix this I have to rename primaryKafkaListenerContainerFactory to kafkaListenerContainerFactory. Why is this so?"
That is correct: kafkaListenerContainerFactory is the default name used when no containerFactory is specified on the listener, and Boot will try to auto-configure it. You should give one of your custom factories that name to override Boot's auto-configuration, because you have an incompatible consumer factory.
Your second question makes no sense to me. Perhaps your getConsumerProperties() is returning the same object each time - you need a copy.
When asking questions like this, it's best to show all the relevant code.
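The "you need a copy" point can be sketched in plain Java (the class and method names below are illustrative stand-ins, not Spring Kafka API): each factory should build its config from a fresh copy of the shared properties, so a per-factory override such as max.poll.interval.ms cannot leak into, or be overwritten by, the other factory's settings.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Properties;

public class ConsumerConfigDemo {

    // Stands in for the question's getConsumerProperties(); the object is
    // shared, which is exactly why callers must copy it rather than mutate it.
    private static final Properties BASE = new Properties();
    static {
        BASE.put("bootstrap.servers", "localhost:9092");
    }

    // Build an independent config map per factory: copy the shared base, then
    // apply per-factory overrides. Nothing written here can reach the map
    // handed to the other factory.
    public static Map<String, Object> buildConfig(String groupId, Map<String, String> overrides) {
        Map<String, Object> config = new HashMap<>();
        BASE.forEach((k, v) -> config.put((String) k, v));
        config.put("group.id", groupId);
        if (overrides != null) {
            config.putAll(overrides);
        }
        return config;
    }
}
```

Mutating the shared Properties object directly (as the question's code does) means whichever consumer factory bean is created last wins, for both factories.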

How to parse settings object in ShellViewModel (Caliburn.Micro)

I have a Dictionary object defined as below:
Dictionary<string, object> dictArguments = new Dictionary<string, object>();
dictArguments.Add("CommandLine", strCommandLineArguments);
And then I am passing it to ShellViewModel as below:
DisplayRootViewFor<ShellViewModel>(dictArguments);
However, I am at a loss to figure out how and where ShellViewModel parses this argument, because as far as Caliburn is concerned, ShellViewModel has a single constructor taking an IEventAggregator. Any pointers please?
Thanks,
Deepak
The parameter for DisplayRootViewFor accepts window settings as a dictionary. So, for example:
Dictionary<string, object> dictArguments = new Dictionary<string, object>();
dictArguments.Add("Height", 1000);
dictArguments.Add("Width", 1500);
dictArguments.Add("ShowInTaskbar", false);
dictArguments.Add("WindowStartupLocation", WindowStartupLocation.CenterScreen);
DisplayRootViewFor<ShellViewModel>(dictArguments);
These settings influence the Height, Width, ShowInTaskbar and WindowStartupLocation properties of your View (Caliburn.Micro does that for you; you do not need to do it manually).
I do not think this is useful for storing the CommandLine argument.

Spring Kafka bean return types

The documentation for Spring Kafka's streams support shows something like:
@Bean
public KStream<Integer, String> kStream(StreamsBuilder kStreamBuilder) {
    KStream<Integer, String> stream = kStreamBuilder.stream("streamingTopic1");
    // ... stream config
    return stream;
}
However, I might want a topology that depends on multiple streams or tables. Can I do:

@Bean
public KStream<Integer, String> kStream(StreamsBuilder kStreamBuilder) {
    KStream<Integer, String> stream1 = kStreamBuilder.stream("streamingTopic1");
    KStream<Integer, String> stream2 = kStreamBuilder.stream("streamingTopic2");
    // ... stream config
    return stream1;
}

In other words, is the bean being returned relevant, or is it only important that kStreamBuilder is being mutated?
It depends.
If you don't need a reference to the KStream elsewhere, there is no need to define it as a bean at all; you can autowire the StreamsBuilder, which is created by the factory bean.
If you need a reference, then each one must be its own bean.
For example, Spring Cloud Stream builds a partial stream which the application then modifies. See here.
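A sketch of the "no bean needed" case (hedged: assumes Spring Kafka with @EnableKafkaStreams, which exposes the StreamsBuilder created by the StreamsBuilderFactoryBean for injection; topic names and the TopologySetup class are illustrative):

```java
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.KStream;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Component;

@Component
public class TopologySetup {

    // The injected StreamsBuilder is the one the factory bean will start.
    // Registering sources and processors on it is the side effect that
    // defines the topology; no KStream bean needs to be returned.
    @Autowired
    public void buildTopology(StreamsBuilder builder) {
        KStream<Integer, String> stream1 = builder.stream("streamingTopic1");
        KStream<Integer, String> stream2 = builder.stream("streamingTopic2");
        stream1.merge(stream2).to("outputTopic");
    }
}
```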

Jackson Mapping Polymorphic

I am trying to parse the JSON structure described at https://developers.nest.com/documentation/api-reference.
A device there can be one of various types, and I want Jackson to instantiate the relevant objects (Thermostat, SmokeAlarm, Camera, etc.).
@Data
@ToString
public class Nest {
    @JsonProperty("metadata")
    private Metadata metadata;
    @JsonProperty("structures")
    private HashMap<String, Structure> structures;
    @JsonProperty("devices")
    private HashMap<String, HashMap<String, Device>> devices;
}
How would I use @JsonTypeInfo to decide which type to instantiate based on the values of the keys?
Another question: how would I get rid of all this multilevel nesting and instead have something like

@JsonProperty("devices")
private List<Device> devices;

parsed according to the keys/subtypes?
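A hedged sketch of one possible approach: in the Nest payload the device type lives in the key of the enclosing map ("thermostats", "smoke_co_alarms", ...), not in a field of the device object itself, so @JsonTypeInfo/@JsonSubTypes, which key off the JSON content, do not apply directly. A custom deserializer can both choose a subtype per key and flatten the two map levels into the List<Device> shape asked about (Device, Thermostat, SmokeAlarm and Camera are assumed from the question):

```java
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

import com.fasterxml.jackson.core.JsonParser;
import com.fasterxml.jackson.databind.DeserializationContext;
import com.fasterxml.jackson.databind.JsonDeserializer;
import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;

public class DevicesDeserializer extends JsonDeserializer<List<Device>> {

    // Maps each payload key to the Device subtype stored under it.
    private static final Map<String, Class<? extends Device>> TYPES = Map.of(
            "thermostats", Thermostat.class,
            "smoke_co_alarms", SmokeAlarm.class,
            "cameras", Camera.class);

    @Override
    public List<Device> deserialize(JsonParser p, DeserializationContext ctxt) throws IOException {
        ObjectMapper mapper = (ObjectMapper) p.getCodec();
        JsonNode root = mapper.readTree(p); // {"thermostats": {"id1": {...}}, ...}
        List<Device> devices = new ArrayList<>();
        root.fields().forEachRemaining(byType -> {
            Class<? extends Device> type = TYPES.get(byType.getKey());
            if (type != null) {
                // Inner map is device-id -> device object; keep only the values.
                byType.getValue().forEach(node ->
                        devices.add(mapper.convertValue(node, type)));
            }
        });
        return devices;
    }
}
```

With that in place, the flattened field from the question becomes:

@JsonProperty("devices")
@JsonDeserialize(using = DevicesDeserializer.class)
private List<Device> devices;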
