Distribution of topic partitions by ConcurrentMessageListenerContainer - spring-kafka

I'm setting up a ConcurrentMessageListenerContainer:
<bean class="org.springframework.kafka.listener.ConcurrentMessageListenerContainer" id="messageListenerContainer">
<constructor-arg index="0" ref="consumerFactory"/>
<constructor-arg index="1" ref="containerProperties"/>
<property name="concurrency" value="2"/>
</bean>
The ConsumerFactory uses this config:
<util:map id="consumerConfig" map-class="java.util.HashMap">
<entry key="#{T(org.apache.kafka.clients.consumer.ConsumerConfig).BOOTSTRAP_SERVERS_CONFIG}"
value="${rp.kafka.bootstrap.servers}"/>
<entry key="#{T(org.apache.kafka.clients.consumer.ConsumerConfig).KEY_DESERIALIZER_CLASS_CONFIG}"
value="org.apache.kafka.common.serialization.StringDeserializer"/>
<entry key="#{T(org.apache.kafka.clients.consumer.ConsumerConfig).VALUE_DESERIALIZER_CLASS_CONFIG}"
value="org.springframework.kafka.support.serializer.JsonDeserializer"/>
<entry key="#{T(org.springframework.kafka.support.serializer.JsonDeserializer).TRUSTED_PACKAGES}"
value="*"/>
<entry key="#{T(org.apache.kafka.clients.consumer.ConsumerConfig).PARTITION_ASSIGNMENT_STRATEGY_CONFIG}"
value="org.apache.kafka.clients.consumer.RoundRobinAssignor"/>
<entry key="#{T(org.apache.kafka.clients.consumer.ConsumerConfig).ENABLE_AUTO_COMMIT_CONFIG}"
value="false"/>
</util:map>
and the ContainerProperties are:
<bean class="org.springframework.kafka.listener.ContainerProperties" id="containerProperties">
<constructor-arg>
<list>
<value>sendSMS</value>
</list>
</constructor-arg>
<property name="groupId" value="main"/>
<property name="messageListener" ref="messageListener"/>
<property name="ackMode" value="RECORD"/>
</bean>
My topic "sendSMS" has 5 partitions on 3-noded cluster with rep factor of 3, so i expect that each KafkaMessageListenerContainer created by Concurrent one (total 2 in that case) will take it's portion of partitions to handle. Hovewer, after an application is started i see in my debugger window that each listener is handling all 5! partitions
https://gyazo.com/183626ff60061b471858f8cc52573353
and the message from the 4th partition (that's where I have a message that hangs processing and is not committed after restarts, but that's not related to this issue) at the same offset is delivered twice, in different threads with different consumers! Why does this happen? Is it a bug or expected behavior?

You are not showing enough information. The concurrent container aggregates the assigned partitions of its child KafkaMessageListenerContainers (one for each concurrency slot):
@Override
public Collection<TopicPartition> getAssignedPartitions() {
    return this.containers.stream()
            .map(KafkaMessageListenerContainer::getAssignedPartitions)
            .filter(Objects::nonNull)
            .flatMap(Collection::stream)
            .collect(Collectors.toList());
}
You need to show logs for the re-delivery; turn on DEBUG logging for more information.
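If you want to check what each child container actually owns (rather than the aggregated view above), you can iterate the children yourself. A minimal sketch, assuming you hold a reference to the concurrent container bean (the helper class name and generic types are illustrative):
import java.util.List;

import org.springframework.kafka.listener.ConcurrentMessageListenerContainer;
import org.springframework.kafka.listener.KafkaMessageListenerContainer;

public class AssignmentDump {

    // Prints each child container's own partition assignment,
    // not the aggregated collection returned by the parent container.
    public static void dump(ConcurrentMessageListenerContainer<String, Object> container) {
        List<KafkaMessageListenerContainer<String, Object>> children = container.getContainers();
        for (int i = 0; i < children.size(); i++) {
            System.out.println("child " + i + " -> " + children.get(i).getAssignedPartitions());
        }
    }
}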

Related

org.hibernate.validator.constraints not picking reloaded messages

I am trying to use Spring's ReloadableResourceBundleMessageSource for LocalValidatorFactoryBean so that when I update an error message the change is reflected without restarting the server. I am using Spring 4.1.4 and hibernate-validator 4.3.2.Final.
Below are the code details -
context.xml -
<mvc:annotation-driven validator="validator" />
<bean id="messageSource" class="org.springframework.context.support.ReloadableResourceBundleMessageSource">
<property name="basenames">
<list>
<value>file:../conf/fileapplication</value> <!-- Messages here will override the below properties file-->
<value>/WEB-INF/application</value>
</list>
</property>
<property name="cacheSeconds" value="10"></property> <!-- Will check for refresh every 10 seconds -->
</bean>
<bean name="validator"
class="org.springframework.validation.beanvalidation.LocalValidatorFactoryBean">
<property name="validationMessageSource">
<ref bean="messageSource"/>
</property>
</bean>
Model -
import org.hibernate.validator.constraints.NotBlank;

public class InputForm {

    @NotBlank(message = "{required.string.blank}")
    String requiredString;
}
Controller -
@RequestMapping(value = "/check/string", method = RequestMethod.POST)
public String checkString(
        @ModelAttribute("formModel") @Valid InputForm inputForm,
        BindingResult result, Model model, HttpServletResponse response,
        HttpServletRequest request) {
    if (result.hasErrors()) {
        model.addAttribute("formModel", inputForm);
        return "userInput";
    }
    // Do some backend validation with the string
    result.reject("string.not.valid", "String is Invalid");
    model.addAttribute("formModel", inputForm);
    return "userInput";
}
application.properties (in the /WEB-INF/ folder)
required.string.blank=Please enter the required string.
string.not.valid=Please enter a valid string.
fileapplication.properties (in /conf/ folder. Will override above file)
required.string.blank=You did not enter the required string. #Does not reflect when I change here
string.not.valid=You did not enter a valid string. #Reflects when I change here
Now the problem I am facing is, when I update "string.not.valid" in fileapplication.properties it reflects at runtime and I see the updated message. But when I update "required.string.blank" in fileapplication.properties it does not reflect at runtime.
Note that the overriding part is working fine for both messages upon application start up. But the "reloading" part is not working fine for "required.string.blank".
This is what I figured out based on my research: we need to create our own MessageInterpolator and add it as a dependency to the validator instead of the message source. When we add a messageSource as a dependency, it is cached by default by the validator, so any message reloading Spring does won't take effect in the validator's cached instance of the messageSource.
Below are the details:
In context.xml, add the custom MessageInterpolator as a dependency to LocalValidatorFactoryBean instead of messageSource:
<mvc:annotation-driven validator="validator" />
<bean id="messageSource" class="org.springframework.context.support.ReloadableResourceBundleMessageSource">
<property name="basenames">
<list>
<value>file:../conf/fileapplication</value> <!-- Messages here will override the below properties file-->
<value>/WEB-INF/application</value>
</list>
</property>
<property name="cacheSeconds" value="10"></property> <!-- Will check for refresh every 10 seconds -->
</bean>
<bean name="validator" class="org.springframework.validation.beanvalidation.LocalValidatorFactoryBean">
<property name="messageInterpolator">
<ref bean="messageInterpolator"/>
</property>
</bean>
<bean name="messageInterpolator"
class="com.my.org.support.MyCustomResourceBundleMessageInterpolator">
<constructor-arg ref="messageSource" />
</bean>
Create your custom MessageInterpolator by extending Hibernate's org.hibernate.validator.messageinterpolation.ResourceBundleMessageInterpolator.
import org.hibernate.validator.messageinterpolation.ResourceBundleMessageInterpolator;
import org.springframework.context.MessageSource;
import org.springframework.validation.beanvalidation.MessageSourceResourceBundleLocator;

public class MyCustomResourceBundleMessageInterpolator extends ResourceBundleMessageInterpolator {

    public MyCustomResourceBundleMessageInterpolator(MessageSource messageSource) {
        // Passing false for the second argument in the super() constructor
        // avoids the messages being cached.
        super(new MessageSourceResourceBundleLocator(messageSource), false);
    }
}
Model, Controller and properties file can be same as in the question.

How to open a new SQLite database at runtime with Spring-JDBC?

For my desktop application I need to change the SQLite database connection at runtime (actually it is just an "Open file" action).
The datasource is configured using Spring-JDBC.
1) Extend SingleConnectionDataSource to be able to change JDBC URL:
import org.springframework.jdbc.datasource.SingleConnectionDataSource;

public class ChangeableSingleConnectionDataSource extends SingleConnectionDataSource {

    public void updateUrl(String filePath) {
        setUrl("jdbc:sqlite:" + filePath);
        resetConnection();
    }
}
2) Define dataSource in context.xml
<bean id="dataSource" class="package.ChangeableSingleConnectionDataSource" destroy-method="destroy">
<property name="driverClassName" value="org.sqlite.JDBC"/>
<property name="url" value="jdbc:sqlite:"/>
<property name="suppressClose" value="true"/>
</bean>
3) When you need to open a new database file just call
dataSource.updateUrl(newFile.getAbsolutePath());
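For illustration, a rough sketch of how the pieces might fit together in the "Open file" action, assuming the data source and a JdbcTemplate built on it are injected (OpenFileAction and the count query are hypothetical):
import java.io.File;

import org.springframework.jdbc.core.JdbcTemplate;

public class OpenFileAction {

    private final ChangeableSingleConnectionDataSource dataSource;
    private final JdbcTemplate jdbcTemplate;

    public OpenFileAction(ChangeableSingleConnectionDataSource dataSource, JdbcTemplate jdbcTemplate) {
        this.dataSource = dataSource;
        this.jdbcTemplate = jdbcTemplate;
    }

    public void open(File file) {
        // Re-point the single connection at the chosen SQLite file;
        // updateUrl() calls resetConnection(), which closes the previous connection.
        dataSource.updateUrl(file.getAbsolutePath());

        // Queries through the same JdbcTemplate now hit the newly opened database.
        Integer tables = jdbcTemplate.queryForObject(
                "select count(*) from sqlite_master where type = 'table'", Integer.class);
        System.out.println("Opened " + file.getName() + " (" + tables + " tables)");
    }
}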

Index error on querying google datastore using gcloud

I am trying to use Google Datastore for my non-GAE application.
For that I have created kinds and ancestor-related entities in Datastore using the gcloud Python library.
I have also updated the Datastore index configuration for all the kinds using the gcd tool via a WEB-INF/datastore-indexes.xml file, and their statuses are "serving".
However, I cannot successfully query the indexed columns, either in the console or using the gcloud library.
Here is the query and the traceback:
import datetime

from gcloud import datastore

ds = datastore.Client(dataset_id='XXXXXX')
query = datastore.Query(ds, kind='event')
query.add_filter('EvtName', '=', 'buy')
query.add_filter('EventDateTime', '<=', datetime.datetime(2015, 10, 22, 8, 45))
for itm in query.fetch():
    print(dict(itm))
gcloud.exceptions.PreconditionFailed: 412 no matching index found.
Here is my datastore-indexes.xml config:
<?xml version="1.0" encoding="utf-8"?>
<datastore-indexes
autoGenerate="false">
<datastore-index kind="event" ancestor="true">
<property name="EvtName" direction="desc" />
<property name="EventDateTime" direction="desc" />
</datastore-index>
<datastore-index kind="att" ancestor="true">
<property name="EvtAttName" direction="desc" />
<property name="EventDateTime" direction="desc" />
</datastore-index>
<datastore-index kind="att_val" ancestor="true">
<property name="AttValue" direction="desc" />
<property name="EventDateTime" direction="desc" />
</datastore-index>
<datastore-index kind="user" ancestor="true">
<property name="EventDateTime" direction="desc" />
</datastore-index>
</datastore-indexes>
Am I missing something?
All of your indexes are designed to be used with ancestor queries (note the ancestor=true). However, your actual query does not query within a specific ancestor.
In order to answer your specific query, you need the index:
<datastore-index kind="event" ancestor="false">
<property name="EvtName" direction="desc" />
<property name="EventDateTime" direction="desc" />
</datastore-index>
Or, if you actually do want to query for entities with a specific parent, make sure to add an ancestor filter with Query#hasAncestor(Key parentKey).

Spring: how to get the app's directory path at bean construction time?

I have a bean as follows:
<bean id="myBean" class="MyBeanClass">
<constructor-arg value="\WEB-INF\myfile.dat"/>
</bean>
In the bean's constructor, I need to build the file's full path. To do that, I first have to find the app's root path.
Thanks and regards.
Update
Per Michael-O's suggestion, here is my solution (so easy).
Spring bean:
<bean id="myBean" class="MyBeanClass">
<constructor-arg value="/myfile.dat"/> <!--under WEB-INF/classes-->
</bean>
Java:
public MyBeanClass(String path) throws Exception {
    ClassPathResource file = new ClassPathResource(path);
    lookup = new LookupService(file.getFile().getPath(), LookupService.GEOIP_MEMORY_CACHE);
}
Michael, thanks!!!
Use Spring's Resource class in your bean and Spring will do the rest for you.
After seeing @curious1's edit: there is a better solution than the one in that answer. Please do not use it; go with this one:
beans.xml:
<!-- START: Improvement 2 -->
<context:annotation-config />
<bean id="service" class="LookupService">
<constructor-arg value="classpath:/myfile.dat"/> <!--under WEB-INF/classes-->
<constructor-arg>
<util:constant static-field="LookupService.GEOIP_MEMORY_CACHE"/>
</constructor-arg>
</bean>
<!-- END: Improvement 2 -->
<!-- Spring autowires here -->
<bean id="myBean" class="MyBeanClass" />
<!-- START: Improvement 1 -->
<bean id="myBean" class="MyBeanClass" />
<constructor-arg value="classpath:/myfile.dat"/> <!--under WEB-INF/classes-->
</bean>
<!-- END: Improvement 1 -->
Java:
public MyBeanClass(Resource path) throws Exception {
    lookup = new LookupService(path.getInputStream(), LookupService.GEOIP_MEMORY_CACHE);
}
This is source-agnostic, does not rely on files and is the Spring way.
Edit 2: Rethinking my code, it can be even better:
public class MyBeanClass {

    @Autowired
    LookupService service;
}
and configure LookupService in your beans.xml.
Maybe you should consider using:
getClass().getClassLoader().getResourceAsStream()
inside the constructor. This will use your classpath, so your "WEB-INF\myfile.dat" will be visible. The next thing is to use a resource directory and keep all resources in one place (by default, under the root directory in the WAR file).
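A rough sketch of that approach, assuming myfile.dat ends up on the classpath (e.g. under WEB-INF/classes); the temp-file copy is only illustrative, for APIs that insist on a real file path:
import java.io.FileNotFoundException;
import java.io.IOException;
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

public class MyBeanClass {

    public MyBeanClass(String resourceName) throws IOException {
        try (InputStream in = getClass().getClassLoader().getResourceAsStream(resourceName)) {
            if (in == null) {
                throw new FileNotFoundException(resourceName + " not found on the classpath");
            }
            // Copy the stream to a temp file so the path can be handed to file-based APIs,
            // e.g. new LookupService(copy.toString(), LookupService.GEOIP_MEMORY_CACHE).
            Path copy = Files.createTempFile("myfile", ".dat");
            Files.copy(in, copy, StandardCopyOption.REPLACE_EXISTING);
        }
    }
}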

runscript executed multiple times for @Autowired jdbcTemplate and H2 in-memory database

I've inherited a project and am trying to get a set of integration tests running against an in-memory H2 database. In order for them to pass, some tables, relationships and reference data need to be created.
I can see the problem: the script referenced in RUNSCRIPT is being executed multiple times, generating Index "XXX_IDX" already exists errors and other violations. So is there a way to force the script to run only once, or do I need an external database? It seems the script is run on every connection, which I assume is by design.
properties file
my.datasource.url=jdbc:h2:mem:my_db;DB_CLOSE_DELAY=-1;MODE=Oracle;MVCC=TRUE;INIT=RUNSCRIPT FROM 'classpath:/create-tables-and-ref-data.sql'
XML config
<bean id="myDataSource" class="org.apache.commons.dbcp.BasicDataSource">
<property name="url" value="${my.datasource.url}"/>
<!-- other properties for username, password etc... -->
</bean>
<bean id="transactionManager" class="org.springframework.jdbc.datasource.DataSourceTransactionManager">
<property name="dataSource" ref="myDataSource"/>
</bean>
<bean id="jdbcTemplate" class="org.springframework.jdbc.core.JdbcTemplate">
<property name="dataSource" ref="myDataSource"/>
</bean>
<tx:annotation-driven transaction-manager="transactionManager"/>
Many Java classes follow this pattern:
@Component
public class SomethingDAOImpl implements SomethingDAO {

    private final JdbcTemplate jdbcTemplate;

    @Autowired
    public SomethingDAOImpl(JdbcTemplate jdbcTemplate) {
        this.jdbcTemplate = jdbcTemplate;
    }
}
@Component
public class SomethingElseDAOImpl implements SomethingElseDAO {

    private final JdbcTemplate jdbcTemplate;

    @Autowired
    public SomethingElseDAOImpl(JdbcTemplate jdbcTemplate) {
        this.jdbcTemplate = jdbcTemplate;
    }
}
With the default bean scope being singleton I thought this would just work, but I guess I'm missing something. Also, if I switch to a real Oracle instance that already has the tables and reference data set up, the tests all pass.
In many cases, it is possible to write the SQL script so that no exceptions are thrown:
create table if not exists test(id int, name varchar(255));
create index if not exists test_idx on test(name);
I ended up using an alternative approach, as I could not write the SQL in a way that could be reapplied without error.
<beans xmlns="http://www.springframework.org/schema/beans"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xmlns:jdbc="http://www.springframework.org/schema/jdbc"
    xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-3.2.xsd
        http://www.springframework.org/schema/jdbc http://www.springframework.org/schema/jdbc/spring-jdbc-3.2.xsd">
    <jdbc:initialize-database data-source="myDataSource" enabled="true" ignore-failures="ALL">
        <jdbc:script location="classpath:create-and-alter-tables-first-then-add-test-data.sql" />
    </jdbc:initialize-database>
</beans>
which is executed once at context initialization.
Note: other namespaces and beans omitted for brevity.
