Spring Kafka - Seek offset from beginning - spring-kafka

I would like to consume messages from the beginning offset. For this, I have added a property "seekToBeginning"=true in the properties file. My class with the @KafkaListener implements ConsumerSeekAware and I have overridden the onPartitionsAssigned() method as shown below. I would like to know if I'm doing it the right way. This method gets called 3 times (there are 3 partitions). Also, my worry is that this method also gets called when there is a CommitFailedException. Please let me know if the code below is correct, or whether I should filter by partition and how. Also, please let me know how to handle this in the case of a CommitFailedException.
@Override
public void onPartitionsAssigned(Map<TopicPartition, Long> assignments, ConsumerSeekCallback callback) {
    if (seekToBeginning) {
        assignments.forEach(
                (topic, action) -> callback.seekToBeginning(topic.topic(), topic.partition()));
    }
}

If you have concurrency = 3 then, yes, it will be called 3 times, once per consumer.
Since 2.3.4, there is a more convenient method:
/**
 * Queue a seekToBeginning operation to the consumer for each
 * {@link TopicPartition}. The seek will occur after any pending offset commits.
 * The consumer must be currently assigned the specified partition(s).
 * @param partitions the {@link TopicPartition}s.
 * @since 2.3.4
 */
default void seekToBeginning(Collection<TopicPartition> partitions) {
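With that overload, the per-partition forEach in the question collapses to a single call. A minimal sketch, assuming the same seekToBeginning flag as in the question:

if (seekToBeginning) {
    // the Map's key set is already a Collection<TopicPartition>
    callback.seekToBeginning(assignments.keySet());
}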
You need a boolean field to only do the seeks on the initial assignment and not after a rebalance.
If you only have one consumer (concurrency = 1), it can be a simple boolean.
e.g. boolean initialSeeksDone.
With concurrency > 1, you need a ThreadLocal:
private final ThreadLocal<Boolean> initialSeeksDone = new ThreadLocal<>();
then
if (this.initialSeeksDone.get() == null) {
    // seek
    this.initialSeeksDone.set(true);
}
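Putting the pieces together, a sketch of the whole listener might look like this; the class name SeekingListener and the constructor injection of seekToBeginning are placeholders, not taken from the original code:

import java.util.Map;

import org.apache.kafka.common.TopicPartition;
import org.springframework.kafka.listener.ConsumerSeekAware;

public class SeekingListener implements ConsumerSeekAware {

    private final boolean seekToBeginning; // e.g. bound from the properties file
    private final ThreadLocal<Boolean> initialSeeksDone = new ThreadLocal<>();

    public SeekingListener(boolean seekToBeginning) {
        this.seekToBeginning = seekToBeginning;
    }

    @Override
    public void onPartitionsAssigned(Map<TopicPartition, Long> assignments,
            ConsumerSeekCallback callback) {
        // seek only on the first assignment for this consumer thread, not on later rebalances
        if (this.seekToBeginning && this.initialSeeksDone.get() == null) {
            callback.seekToBeginning(assignments.keySet());
            this.initialSeeksDone.set(Boolean.TRUE);
        }
    }
}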

Related

Which event to use to get notified if entity is written to DB (postFlush)?

I have a (Doctrine) entity in a Symfony 4 project. What I'm looking for is something like a postFlush event (i.e. after it has been written to the DB).
I want to notify other systems: when a Customer is updated, I want to dispatch a CustomerUpdated($customer->id) message onto my queues. I'm having difficulty finding the proper listener/event handler for this. The current problem is that the event is dispatched BEFORE the DB write has happened, so the consuming service asks for info about an entry that doesn't exist yet (for about 2 seconds) or fetches old data, as the DB isn't updated yet.
What I've tried:
class CustomerListener implements EntityListener {
    public function getSubscribedEvents(): array {
        return [ Events::postFlush ];
    }
    public function postFlush() { die('looking for me?'); }
}
This does absolutely nothing and fails silently.
I also tried the Events::postUpdate event, which doesn't work for new entries (the data isn't flushed to the DB yet, resulting in old data).
I also tried the Events::postPersist event for new items, which doesn't work because the data doesn't exist in the DB yet! (This is my current challenge.)
The Doctrine docs aren't telling me anything useful either (or I'm not seeing it).
Of course I've tried researching this, but I can't seem to find anything.
What I'm looking for:
1. Entity gets created OR entity gets updated
2. Entity gets written to the database
3. NOW DO SOMETHING HERE. I don't want to alter the entity anymore, just notify other systems AFTER the save.
Both postPersist and postUpdate fire between points 1 and 2 and are not suitable. I am aware that flush isn't entity specific, but I need something.
It could be that I'm using a listener/event subscriber incorrectly; at this point I can't see the forest for the trees.
I had difficulty finding an answer, so I'll answer it myself:
There isn't a 'postFlush' event at the entity level. There is only a global one, i.e. you get EVERYTHING that is being flushed: all entities for create, update and delete statements.
I've made an EntityFlushListener, which I've stripped down to a minimal example:
use Doctrine\Common\EventSubscriber;
use Doctrine\ORM\EntityManagerInterface;
use Doctrine\ORM\Event\OnFlushEventArgs;
use Doctrine\ORM\Event\PostFlushEventArgs;
use Doctrine\ORM\Events;
use Doctrine\ORM\UnitOfWork;

/**
 * This class listens to flush events in order to dispatch their respective messages onto the queue.
 * There is no specific 'postFlush' event for an entity, so this is a global listener for ALL entities.
 */
class EntityFlushListener implements EventSubscriber
{
    private UnitOfWork $uow;
    private array $storedInserts = [];
    private array $storedUpdates = [];
    private array $storedDeletes = [];

    public function __construct(EntityManagerInterface $entityManager)
    {
        $this->uow = $entityManager->getUnitOfWork();
    }

    public function getSubscribedEvents(): array
    {
        return [Events::onFlush, Events::postFlush];
    }

    /*
     * We have to use the 'onFlush' event to collect the entities to dispatch AFTER the flush, as 'postFlush' is unaware of them.
     * Please note: duplicates can occur. Logic to deal with this might need to be implemented.
     */
    public function onFlush(OnFlushEventArgs $eventArgs): void
    {
        $this->storedInserts = $this->uow->getScheduledEntityInsertions();
        $this->storedUpdates = $this->uow->getScheduledEntityUpdates();
        $this->storedDeletes = $this->uow->getScheduledEntityDeletions();
    }

    /*
     * The entities have now been written to the DB, do what you want with them.
     */
    public function postFlush(PostFlushEventArgs $eventArgs): void
    {
        dd(
            $this->storedInserts,
            $this->storedUpdates,
            $this->storedDeletes,
        );
    }
}

Spring-kafka support for only executing SeekToTimestamp once

My use-case is to set the consumer group offset based on a timestamp.
For this I am using the seekToTimestamp method of ConsumerSeekCallback inside the onPartitionsAssigned() method of ConsumerSeekAware.
When I start my application it seeks to the timestamp I specified, but during rebalancing it seeks to that timestamp again.
I want this to happen only if the consumer group offset is less than the offset at that particular timestamp; if it's greater, it should not seek.
Is there a way to achieve this, or does Spring-Kafka provide some listener for a new consumer group, so that when the new consumer group gets created it will seek based on the timestamp and otherwise use the existing offsets?
public class KafkaConsumer implements ConsumerSeekAware {

    @Override
    public void onPartitionsAssigned(Map<TopicPartition, Long> assignments, ConsumerSeekCallback callback) {
        long timestamp = 1623775969;
        callback.seekToTimestamp(new ArrayList<>(assignments.keySet()), timestamp);
    }
}
Just add a boolean field (boolean seeksDone;) to your implementation; set it to true after seeking and only seek if it is false.
You have to decide, though, what to do if you only get partitions 1 and 3 on the first rebalance and 1, 2, 3, 4 on the next.
That's not an issue if you only have one application instance, of course. But if you need to seek each partition when it is first assigned, you'll have to track the state for each partition, as in the sketch below.
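One way to track that is to remember which partitions have already been seeked. A sketch, assuming one consumer per listener instance (with concurrency > 1 the set would need to be thread-confined, e.g. a ThreadLocal); the class name and the millisecond timestamp are placeholders:

import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;

import org.apache.kafka.common.TopicPartition;
import org.springframework.kafka.listener.ConsumerSeekAware;

public class TimestampSeekingConsumer implements ConsumerSeekAware {

    private final long timestamp = 1623775969000L; // example value, epoch milliseconds
    private final Set<TopicPartition> alreadySeeked = new HashSet<>();

    @Override
    public void onPartitionsAssigned(Map<TopicPartition, Long> assignments,
            ConsumerSeekCallback callback) {
        List<TopicPartition> firstTimeAssigned = new ArrayList<>();
        for (TopicPartition tp : assignments.keySet()) {
            if (this.alreadySeeked.add(tp)) { // true only the first time this partition is seen
                firstTimeAssigned.add(tp);
            }
        }
        if (!firstTimeAssigned.isEmpty()) {
            callback.seekToTimestamp(firstTimeAssigned, timestamp);
        }
    }
}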

Vaadin - grid.getDataProvider().refreshAll() does not work after refreshing the browser

How are you? I have an issue with push after refreshing the tab that contains my grid. I am using Vaadin 8.0.4 and an up-to-date Google Chrome, and my example is based on https://github.com/vaadin/archetype-application-example
My application shows data stored in MongoDB. When I make a change directly in the DB it is reflected in the grid every 30 seconds or so, with push. It always works on a single tab; the problem appears when I refresh the tab or create a new one: the push seems to be disconnected and my grid is not updated anymore. The strange thing is that I added @PreserveOnRefresh, and in the first tab that accessed the application the push keeps working even after refreshing, which is very strange.
This instance checks for changes in my DB:
private ScheduledExecutorService scheduler = Executors.newScheduledThreadPool(5);
and I update the grid with
grid.getDataProvider().refreshAll()
I even tried broadcasting to the other tabs, using the pattern described in the book, 12.16.4 Broadcasting to Other Users, because the notifications do keep showing up in all the tabs, but the grid only updates in the original tab.
Update:
In this example application the problem is actually the login: when I remove it, everything works perfectly as it should with push. But only when I remove the login.
@Override
public void receiveBroadcast() {
    access(() -> {
        //Notification.show(message);
        //grid.getDataProvider().refreshAll();
        getNavigator().removeView(MonitorCrudView.VIEW_NAME);
        getNavigator().addView(MonitorCrudView.VIEW_NAME, new MonitorCrudView(this));
        getNavigator().navigateTo(MonitorCrudView.VIEW_NAME);
        Notification.show("Grid updated", Notification.Type.TRAY_NOTIFICATION);
    });
}
The detail is that when I have the access control enabled and I log in as admin, executing the above method gives me an exception of type "No request bound to current thread", coming from the CurrentUser class:
https://github.com/vaadin/archetype-application-example/blob/master/mockapp-ui/src/main/java/org/vaadin/mockapp/samples/authentication/CurrentUser.java
But here in Vaadin 8.0.4 it changes a little:
public final class CurrentUser {

    /**
     * The attribute key used to store the username in the session.
     */
    public static final String CURRENT_USER_SESSION_ATTRIBUTE_KEY = CurrentUser.class
            .getCanonicalName();

    private CurrentUser() {
    }

    /**
     * Returns the name of the current user stored in the current session, or an
     * empty string if no user name is stored.
     *
     * @throws IllegalStateException
     *             if the current session cannot be accessed.
     */
    public static String get() {
        String currentUser = (String) getCurrentRequest().getWrappedSession()
                .getAttribute(CURRENT_USER_SESSION_ATTRIBUTE_KEY);
        if (currentUser == null) {
            return "";
        } else {
            return currentUser;
        }
    }

    /**
     * Sets the name of the current user and stores it in the current session.
     * Using a {@code null} username will remove the username from the session.
     *
     * @throws IllegalStateException
     *             if the current session cannot be accessed.
     */
    public static void set(String currentUser) {
        if (currentUser == null) {
            getCurrentRequest().getWrappedSession().removeAttribute(
                    CURRENT_USER_SESSION_ATTRIBUTE_KEY);
        } else {
            getCurrentRequest().getWrappedSession().setAttribute(
                    CURRENT_USER_SESSION_ATTRIBUTE_KEY, currentUser);
        }
    }

    private static VaadinRequest getCurrentRequest() {
        VaadinRequest request = VaadinService.getCurrentRequest();
        if (request == null) {
            throw new IllegalStateException(
                    "No request bound to current thread");
        }
        return request;
    }
}
UPDATE:
https://github.com/rucko24/testView/blob/master/MyApp-ui/src/main/java/example/samples/crud/SampleCrudView.java
In this class I added the button that broadcasts to all UIs.
Log in as admin.
Click on "Update grid".
It should throw an exception like this:
java.util.concurrent.ExecutionException:
java.lang.IllegalStateException: No request bound to current thread
It does not keep throwing the exception after refreshing the UI, but when opening the app in an incognito tab it always throws the exception once before updating.
With the base project and MongoDB plus
private static ScheduledExecutorService scheduler = Executors.newScheduledThreadPool(5);
I always get the same exception as above, and the view never changes with push.
Disclaimer: This is better suited as a comment but it does not fit the allocated space.
The VaadinService.getCurrentRequest() API doc states that:
The current response can not be used in e.g. background threads because of the way server implementations reuse response instances.
At the same time, the UI.access() javadoc is somewhat ambiguous stating that:
Please note that the runnable might be invoked on a different thread or later on the current thread, which means that custom thread locals might not have the expected values when the command is executed
The above statements kind of explain why VaadinService.getCurrentRequest() is null in your getCurrentRequest() method.
Nonetheless, it seems that UI.getCurrent() returns an instance when running in that background thread, also suggested by this vaadin forum post and vaadin book:
Your code is not thread safe as it does not lock the VaadinSession before accessing the UI. The preferred pattern is using the UI.access and VaadinSession.access methods as described in Book of Vaadin section 11.16.3. Inside an access block Vaadin automatically sets the relevant threadlocals in addition to properly handling session locking.
In conclusion, I'd suggest replacing all the calls to getCurrentRequest().getWrappedSession() with UI.getCurrent().getSession(), e.g.:
UI.getCurrent().getSession().getAttribute(CURRENT_USER_SESSION_ATTRIBUTE_KEY);
or
UI.getCurrent().getSession().setAttribute(CURRENT_USER_SESSION_ATTRIBUTE_KEY, currentUser);
I tested this with your sample and it worked fine.
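For illustration, the two accessors of CurrentUser rewritten that way might look roughly like this. This is a sketch inside the existing class (with com.vaadin.ui.UI imported); it relies on UI.getCurrent() being set, which is the case inside a UI.access() block:

public static String get() {
    String currentUser = (String) UI.getCurrent().getSession()
            .getAttribute(CURRENT_USER_SESSION_ATTRIBUTE_KEY);
    return currentUser == null ? "" : currentUser;
}

public static void set(String currentUser) {
    // VaadinSession.setAttribute(name, null) removes the stored value,
    // so no separate removeAttribute branch is needed
    UI.getCurrent().getSession()
            .setAttribute(CURRENT_USER_SESSION_ATTRIBUTE_KEY, currentUser);
}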

testng - HTTP REST testing behind Login

I have set up a project for testing an HTTP REST application using TestNG / Maven / Spring's RestTemplate.
I use it to do functional testing; multiple calls to the REST application are contained within suites to mimic user processes.
This is working fine.
Now we have turned on authentication.
The question is how to do this with TestNG. How can I log in only once for my test suite?
I can use a @BeforeSuite method, call the login page, log in and catch the cookie needed for all other requests. But where do I store this cookie so all test cases can add it?
I probably have to add some code to the tests to add the cookie, of course... but how do I get hold of it?
I looked into @Parameters and @DataProvider, but these don't seem to help me much.
Any help/suggestion is much appreciated.
I have created a workable solution.
What I have done is work with a singleton object and with a @DataProvider to get the data to the test:
The data provider creates a singleton object.
The singleton object calls the login page when it is created and, on every call from the different tests, returns the cookie information.
Maybe it is a bit of a hack, but it works.
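For reference, such a singleton plus data provider could look roughly like this; the class names, login URL, credentials and cookie handling are placeholders for the illustration, not taken from the original project:

import org.springframework.http.HttpHeaders;
import org.springframework.http.ResponseEntity;
import org.springframework.web.client.RestTemplate;
import org.testng.annotations.DataProvider;
import org.testng.annotations.Test;

// Holds the session cookie, obtained once on first use.
final class AuthenticatedSession {

    private static AuthenticatedSession instance;
    private final String cookie;

    private AuthenticatedSession() {
        // log in once and capture the Set-Cookie header
        RestTemplate restTemplate = new RestTemplate();
        ResponseEntity<String> response = restTemplate.postForEntity(
                "http://localhost:8080/login?user=test&password=test", null, String.class);
        this.cookie = response.getHeaders().getFirst(HttpHeaders.SET_COOKIE);
    }

    static synchronized AuthenticatedSession getInstance() {
        if (instance == null) {
            instance = new AuthenticatedSession();
        }
        return instance;
    }

    String getCookie() {
        return cookie;
    }
}

public class CustomerFlowTest {

    // The data provider hands the singleton (and thus the cookie) to each test.
    @DataProvider(name = "session")
    public Object[][] session() {
        return new Object[][] { { AuthenticatedSession.getInstance() } };
    }

    @Test(dataProvider = "session")
    public void createCustomer(AuthenticatedSession session) {
        HttpHeaders headers = new HttpHeaders();
        headers.add(HttpHeaders.COOKIE, session.getCookie());
        // ... perform the REST call with these headers against the application under test ...
    }
}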
The singleton solution is somewhat heavy-handed, as it prevents any parallelization of tests in the future.
There are some ways to solve this problem. One is to pass an ITestContext instance to your @BeforeSuite/@BeforeTest and @BeforeClass configuration methods and put/get the parameters via the test context in every instance:
public class Test {

    /** Property foo is set once per suite. */
    protected String foo;

    /** Property bar is set once per test. */
    protected String bar;

    /**
     * As this method is executed only once for all inheriting instances before the test suite starts, it puts
     * any configuration/resources needed by test implementations into the test context.
     *
     * @param context test context for storing test conf data
     */
    @BeforeSuite
    public void beforeSuite(ITestContext context) {
        context.setAttribute("foo", "I was set in @BeforeSuite");
    }

    /**
     * As this method is executed only once for all inheriting instances before the test starts, it puts any
     * configuration/resources needed by test implementations into the test context.
     *
     * @param context test context for storing test conf data
     */
    @BeforeTest(alwaysRun = true)
    public void beforeTest(ITestContext context) {
        context.setAttribute("bar", "I was set in @BeforeTest");
    }

    /**
     * This method is run before the first method of a test instance is started and gets all required configuration from
     * the test context.
     *
     * @param context test context to retrieve conf data from.
     */
    @BeforeClass
    public void beforeClass(ITestContext context) {
        foo = (String) context.getAttribute("foo");
        bar = (String) context.getAttribute("bar");
    }
}
This solution works even if the #BeforeSuite/Test/Class methods are in a superclass of the actual test implementation.
If you are delegating the login to Spring Security and your backend does not store state (meaning it only authorizes isolated requests), then you do not need to test it. This means that you can disable authentication (cookie obtaining) in your tests. That way you decouple the test itself from the authorization.
But if you do not want to do that, and you organise your tests in suites, you can store the cookie in a private member. The cookie will be in the auth header of the login response.
public class MySuiteTest {

    private String cookie;

    @BeforeSuite
    public void login() {
        // obtain the cookie from the auth header of the login response and keep it
        this.cookie = /* cookie from the login response */ null;
    }

    // ... rest of the suite reads this.cookie
}
Another way to look at it is to execute the login as a part of your test.
I do not know of any more elegant way to do it.

How do I set the query cache on a call issued by the Seam engine

@In
Identity identity;

Boolean newValue = identity.hasPermission(target, action);
Any call to the above method also triggers a "select role from Role r" query, which is issued by the underlying Seam engine. How do I set the query cache for this call as a query hint (e.g. the org.hibernate.cacheable flag) so that it doesn't get executed again?
Note: the Role information is never going to change, hence I view this as an unnecessary SQL call.
I am not into Hibernate, but as this question is still unanswered: we extended the standard Identity class of Seam for several reasons. You might want to extend it as well, to help you cache the results.
As this cache is session scoped, it has the possible benefit that it will be reloaded when the user logs off/on again - but whether that is desirable depends on your requirements.
Best regards,
Alexander.
/**
 * Extended Identity to implement e.g. caching.
 */
@Name("org.jboss.seam.security.identity")
@Scope(SESSION)
@Install(precedence = Install.APPLICATION)
@BypassInterceptors
@Startup
public class MyIdentity extends Identity {

    // place a concurrent hash map here, keyed by target + action
    private final Map<String, Boolean> permissionCache = new ConcurrentHashMap<String, Boolean>();

    @Override
    public boolean hasPermission(Object name, String action) {
        // either use the cached result in the hash map ...
        String key = name + "|" + action;
        Boolean cached = permissionCache.get(key);
        if (cached == null) {
            // ... or call super.hasPermission() and cache the result
            cached = super.hasPermission(name, action);
            permissionCache.put(key, cached);
        }
        return cached;
    }
}
