How do I write a subscribable query involving relations in Flutter ObjectBox?

I am continuously receiving updates from a server and placing them in my ObjectBox database. My app is meant to visualize this data. The data involves runners, competition classes, etc.
The effect I want to achieve is that when there is an update to a class of runners, the widget visualizing this class (competition class - not programming class) knows it's time to request data and redraw. It seems to me that this might be accomplished by listening to a query that filters runners by class.
@Entity()
class Runner {
  int id;
  String name;
  final runnerClass = ToOne<Class>();
  // constructors, other fields and methods omitted
}

@Entity()
class Class {
  int id;
  String name;
  @Backlink("runnerClass")
  final runners = ToMany<Runner>();
  // constructors, other fields and methods omitted
}
// in persistence class
late Stream<Query<Runner>> watchedRunners;

// in persistence class, method
QueryBuilder<Runner> query = runnerBox.query();
query.link(Runner_.runnerClass, Class_.name.equals("Easy"));
watchedRunners = query.watch();

// in listening class
final sub = db.watchedRunners.listen((Query<Runner> query) {
  print("THINGS BE HAPPENING");
});
I then test this by changing one runner on the server. I change their class from one that is not Easy to another that is not Easy. In other words, the runners in the Easy class receive no additions, removals or modifications, and in the XML update from the server, I can see as much. Yet this listener fires, I'm guessing because there was a change to runners at all.
If I print the results of the query in the listening class, it says everyone is in the Easy class; if I print the length, it consistently says 31 (whereas there are 82 runners in total). If I actually move runners in/out of the Easy class, the number updates to reflect it, and the list of names checks out. So it seems to me the query is correct, but the listener is not.
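For what it's worth, the workaround I'm considering is to compare consecutive query results myself and only react when the visible data actually changes (just a sketch; it assumes package:collection for the list comparison):

import 'package:collection/collection.dart';

// Sketch: map each fired query to the data the widget actually shows,
// then drop notifications where that data did not change.
final easyRunnerNames = db.watchedRunners
    .map((Query<Runner> query) => query.find().map((r) => r.name).toList())
    .distinct((previous, next) =>
        const ListEquality<String>().equals(previous, next));

final sub = easyRunnerNames.listen((names) {
  print("Easy class now has ${names.length} runners");
});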

Related

Starting multiple Kafka listeners in Spring Kafka?

One of our dev teams is doing something I've never seen before.
First they're defining an abstract class for their consumers.
public abstract class KafkaConsumerListener {
    protected void processMessage(String xmlString) {
    }
}
Then they use 10 classes like the one below to create 10 individual consumers.
@Component
public class <YouNameIt>Consumer extends KafkaConsumerListener {

    private static final String <YouNameIt> = "<YouNameIt>";

    @KafkaListener(topics = "${my-configuration.topicname}",
            groupId = "${my-configuration.topicname.group-id}",
            containerFactory = <YouNameIt>)
    public void listenToStuff(@Payload String message) {
        processMessage(message);
    }
}
So with this they're trying to start 10 Kafka listeners (one class/object per listener). Each listener should have its own consumer group (with its own name) and consume from one (but different) topic.
They seem to use different ConcurrentKafkaListenerContainerFactories, each with a @Bean annotation, so they can assign a different groupId to each container factory.
Is something like that supported by Spring Kafka?
It seems that it worked until a few days ago; now one consumer group gets stuck all the time. It starts, reads a few records and then hangs, and the consumer lag keeps growing.
Any ideas?
Yes, it is supported, but it's not necessary to create multiple factories just to change the group id - the groupId property on the annotation overrides the factory property.
Problems like the one you describe most likely mean the consumer thread is "stuck" in user code someplace; take a thread dump to see what the thread is doing.
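For example, a minimal sketch of that simplification (class, topic and group names here are made up): a single default container factory, with the consumer group set per listener through the annotation's groupId attribute, so no per-group factories are needed.

import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.messaging.handler.annotation.Payload;
import org.springframework.stereotype.Component;

// Sketch (hypothetical names): the listener uses the default container
// factory; groupId on the annotation overrides the factory's group id.
@Component
public class OrderConsumer extends KafkaConsumerListener {

    @KafkaListener(topics = "${my-configuration.order-topic}",
            groupId = "order-consumer-group")
    public void listenToStuff(@Payload String message) {
        processMessage(message);
    }
}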

Using provider pattern with database in flutter

I am new to Flutter and especially to the provider pattern. I went through the documentation, examples and some tutorials for provider-based state management, and I am trying to implement it in one of my projects. I am developing a personal expense manager app. The problem is that all the documentation, examples and tutorials assume the model provider has all the required data available in memory, that is, all the mock data is already there in a list. It is fairly easy to understand the addition/deletion of data and the change notification. But in a real app, the data needs to be loaded either from the local database or from the Internet. That is when things get confusing and messy. Here is my model:
class Expense {
  int id;
  String name;
  DateTime date;
  double amount;
  PaymentType paymentType; // This is an enum (card, cash etc.)
  ExpenseCategory category; // Categories like fuel, groceries etc.
  Expense({@required this.name, @required this.date, @required this.amount, @required this.paymentType, @required this.category});
}

class ExpenseCategory {
  int id;
  String name;
  ExpenseCategory({@required this.name});
}
Data manipulation class:
class ExpenseRepository {
  static Future<List<Expense>> getAllExpenses({@required int month}) async {
    return await mockService.getAllExpensesData(month: month);
  }

  static Future<List<Expense>> getRecentExpenses() async {
    return await mockService.getRecentExpensesData();
  }

  static Future<List<CategorywiseAmount>> getCategorywiseExpensesList({@required int month}) async {
    return await mockService.getCategorywiseExpensesListData(month: month);
  }
}
The data is, for the time being, loaded from a mock service which will be replaced by the local database. Nevertheless, it simulates the async/await pattern.
Keeping the above code in view, I have the following questions:
Do I have to convert the "Expense" model into a provider ("Expense" and "Expenses" providers), or do I have to create a separate class which will act as a provider? In the docs and tutorials, I have seen that the models have been converted to providers, but is that the right thing to do? I may be wrong, but I think the separation of concerns would be violated. As far as I have read, a model should not do anything else other than being a model.
How does the expense provider actually load data from the database (or from the mock database in my case)? If I follow the tutorials, I have to have a provider like this:
class ExpensesProvider with ChangeNotifier {
  List<Expense> _expensesList;

  List<Expense> get recentExpenses {
    return _expensesList;
  }
}
But how will the provider load data from the mock database into the _expensesList property, given that the mock method getRecentExpenses() returns a future (and the real one will too) and a future can't be used in the getter? Or do I have to return a future from the getter itself?
If a new expense is added, the list of recent expenses should update automatically. Let's assume for the time being that this provider is somehow hooked to the database. I have the following doubts:
a) Does the provider watch for changes in the database, or does it watch for changes in the in-memory model/list and trigger the rebuilds automatically?
b) Or maybe it doesn't watch either of the above, and we need to manually trigger it with notifyListeners?
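To make the loading question concrete, this is the kind of wiring I imagine (just a guess, sketched against the repository above; I don't know if this is the intended pattern):

import 'package:flutter/foundation.dart';

// Sketch: a separate ChangeNotifier that loads asynchronously from the
// repository and notifies manually; the model classes stay plain.
class ExpensesProvider with ChangeNotifier {
  List<Expense> _expensesList = [];

  List<Expense> get recentExpenses => _expensesList;

  Future<void> loadRecentExpenses() async {
    _expensesList = await ExpenseRepository.getRecentExpenses();
    notifyListeners(); // manual trigger; listening widgets rebuild
  }
}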
The confusion comes from these comments:
// Consumer looks for an ancestor Provider widget
// and retrieves its model (Counter, in this case).
// Then it uses that model to build widgets, and will trigger
// rebuilds if the model is updated.
These comments are from the flutter samples app https://github.com/flutter/samples/blob/master/provider_counter/lib/main.dart
Am I even using the right tools? I mean, maybe there is something else that can be used instead of provider (but with similar or more features) together with a local database, to make the process simpler.

EF Core Update with List

To make updates to a record of SQL Server using Entity Framework Core, I query the record I need to update, make changes to the object and then call .SaveChanges(). This works nice and clean.
For example:
var emp = _context.Employee.FirstOrDefault(item => item.IdEmployee == Data.IdEmployee);
emp.IdPosition = Data.IdPosition;
await _context.SaveChangesAsync();
But is there a standard method if I want to update multiple records?
My first approach was using a list and passing it to the controller, but then I would need to go through that list and save changes for every item; I never really finished this option, as I regarded it as not optimal.
For now, instead of passing a list to the controller, I pass each object to the controller using a for loop (kind of the same...):
for (int i = 0; i < ObjectList.Count; i++)
{
    /* Some code */
    var httpResponseObject = await MyRepositories.Post<Object>(url + "/Controller", Object);
}
And then do the same thing on the controller as before, when updating only one record, for each of the records...
I don't feel this is the best possible approach, but I haven't found another way, yet.
What would be the optimal way of doing this?
Your question has nothing to do with Blazor... However, I'm not sure I understand what the issue is. When you call the SaveChangesAsync method, all changes in your context are committed to the database. You don't have to pass one object at a time... You can pass a list of objects.
Hope this helps...
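In code, that looks roughly like this (a sketch; it assumes the Employee entity from the question and a posted dataList of changes):

// Sketch: modify several tracked entities, then persist everything
// with a single SaveChangesAsync call (one transaction).
foreach (var item in dataList)
{
    var emp = await _context.Employee
        .FirstOrDefaultAsync(e => e.IdEmployee == item.IdEmployee);
    if (emp != null)
    {
        emp.IdPosition = item.IdPosition;
    }
}
await _context.SaveChangesAsync();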
Updating records in bulk using Entity Framework or other Object Relational Mapping (ORM) libraries is a common challenge because they will run an UPDATE command for every record. You could try using Entity Framework Plus, which is an extension to do bulk updates.
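With that extension, a batch update looks roughly like the following (a sketch only; the exact method names can vary between versions, and the filter and values here are illustrative):

// Sketch (Entity Framework Plus style batch update): a single UPDATE
// statement is generated in the database, without loading entities
// into memory. idsToMove and newPosition are made-up example values.
await _context.Employee
    .Where(e => idsToMove.Contains(e.IdEmployee))
    .UpdateAsync(e => new Employee { IdPosition = newPosition });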
If updating multiple records with a single call is critical for you, I would recommend just writing a stored procedure and calling it from your service. Entity Framework can also run direct queries and stored procedures.
It looks like the user makes some changes and then a save action needs to persist multiple records at the same time. You could trigger multiple AJAX calls—or, if you need, just one.
What I would do is create an endpoint—with an API controller and an action—that's specific to your needs. For example, to update the position of records in a table:
Controller:
/DataOrder
Action:
[HttpPut]
public async Task Update([FromBody] DataChanges changes)
{
    foreach (var change in changes.Items)
    {
        var dbRecord = _context.Employees.Find(change.RecordId);
        dbRecord.IdPosition = change.Position;
    }
    await _context.SaveChangesAsync();
}

public class DataChanges
{
    public List<DataChange> Items { get; set; }

    public DataChanges()
    {
        Items = new List<DataChange>();
    }
}

public class DataChange
{
    public int RecordId { get; set; }
    public int Position { get; set; }
}
The foreach statement will execute an UPDATE for every record. If you want a single database call, however, you can write a SQL query or have a stored procedure in the database and pass the data as a DataTable parameter instead.
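For completeness, the DataTable route might look like this (a sketch; the table type dbo.PositionChangeType and the procedure dbo.UpdatePositions are assumed to exist in the database):

using System.Data;
using Microsoft.Data.SqlClient;

// Sketch: send the whole batch in one database call as a
// table-valued parameter to a stored procedure.
var table = new DataTable();
table.Columns.Add("RecordId", typeof(int));
table.Columns.Add("Position", typeof(int));
foreach (var change in changes.Items)
{
    table.Rows.Add(change.RecordId, change.Position);
}

var parameter = new SqlParameter("@changes", table)
{
    SqlDbType = SqlDbType.Structured,
    TypeName = "dbo.PositionChangeType" // assumed user-defined table type
};
await _context.Database.ExecuteSqlRawAsync(
    "EXEC dbo.UpdatePositions @changes", parameter);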

Axon Sagas duplicates events in event store when replaying events to new DB

We have an Axon application that stores new Orders. For each order state change (OrderStateChangedEvent) it schedules a couple of tasks. The tasks are triggered and processed by yet another Saga (TaskSaga - out of scope of this question).
When I delete the projection database but leave the event store, then run the application again, the events are replayed (which is correct), but the tasks are duplicated.
I suppose this is because the OrderStateChangedEvent triggers a new set of ScheduleTaskCommands each time.
Since I'm new to Axon, I can't figure out how to avoid this duplication.
The event store runs on AxonServer.
The Spring Boot application autoconfigures the Axon components.
The projection database contains the projection tables and the Axon tables:
token_entry
saga_entry
association_value_entry
I suppose all the events are replayed because, by recreating the database, the Axon tables are gone (hence no record of the last applied event).
Am I missing something?
Should the token_entry/saga_entry/association_value_entry tables be part of the DB with the projection tables on each application node?
I thought that the event store could be replayed onto a new application node's DB at any time without changing the event history, so I can run as many nodes as I wish. Or that I could remove the projection DB at any time and run the application, causing the events to be projected to the fresh DB again. Or is this not true?
In general, my problem is that one event produces a command leading to new (duplicated) events being produced. Should I avoid this "chaining" of events to avoid duplication?
THANKS!
Axon configuration:
@Configuration
public class AxonConfig {

    @Bean
    public EventSourcingRepository<ApplicationAggregate> applicationEventSourcingRepository(EventStore eventStore) {
        return EventSourcingRepository.builder(ApplicationAggregate.class)
                .eventStore(eventStore)
                .build();
    }

    @Bean
    public SagaStore sagaStore(EntityManager entityManager) {
        return JpaSagaStore.builder().entityManagerProvider(new SimpleEntityManagerProvider(entityManager)).build();
    }
}
The CreateOrderCommand is received by the Order aggregate (the fromCommand method just maps the command 1:1 to an event):
@CommandHandler
public OrderAggregate(CreateOrderCommand cmd) {
    apply(OrderCreatedEvent.fromCommand(cmd))
            .andThenApply(() -> OrderStateChangedEvent.builder()
                    .applicationId(cmd.getOrderId())
                    .newState(OrderState.NEW)
                    .build());
}
The Order aggregate sets the properties:
@EventSourcingHandler
protected void on(OrderCreatedEvent event) {
    id = event.getOrderId();
    // ... additional properties set
}

@EventSourcingHandler
protected void on(OrderStateChangedEvent event) {
    this.state = event.getNewState();
}
The OrderStateChangedEvent is handled by a Saga that schedules a couple of tasks for an order in that particular state:
private Map<String, TaskStatus> tasks = new HashMap<>();
private OrderState orderState;

@StartSaga
@SagaEventHandler(associationProperty = "orderId")
public void on(OrderStateChangedEvent event) {
    orderState = event.getNewState();
    List<OrderStateAwareTaskDefinition> tasksByState = taskService.getTasksByState(orderState);
    if (tasksByState.isEmpty()) {
        finishSaga(event.getOrderId());
    }
    tasksByState.stream()
            .map(task -> ScheduleTaskCommand.builder()
                    .orderId(event.getOrderId())
                    .taskId(IdentifierFactory.getInstance().generateIdentifier())
                    .targetState(orderState)
                    .taskName(task.getTaskName())
                    .build())
            .peek(command -> tasks.put(command.getTaskId(), SCHEDULED))
            .forEach(command -> commandGateway.send(command));
}
I think I can help you in this situation.
So, this happens because the TrackingToken used by the TrackingEventProcessor which supplies all the events to your Saga instances is initialized to the beginning of the event stream. Due to this the TrackingEventProcessor will start from the beginning of time, thus getting all your commands dispatched for a second time.
There are a couple of things you could do to resolve this.
You could, instead of wiping the entire database, only wipe the projection tables and leave the token table intact.
You could configure the initialTrackingToken of a TrackingEventProcessor to start at the head of the event stream instead of the tail.
Option 1 would work out fine, but requires some delegation from the operations perspective. Option 2 leaves it in the hands of a developer, which is potentially a little safer than the other solution.
To adjust the token to start at the head, you can instantiate a TrackingEventProcessor with a TrackingEventProcessorConfiguration:
EventProcessingConfigurer configurer;

TrackingEventProcessorConfiguration trackingProcessorConfig =
        TrackingEventProcessorConfiguration.forSingleThreadedProcessing()
                .andInitialTrackingToken(StreamableMessageSource::createHeadToken);

configurer.registerTrackingEventProcessor("{class-name-of-saga}Processor",
        Configuration::eventStore,
        c -> trackingProcessorConfig);
You'd thus create the desired configuration for your Saga, calling the andInitialTrackingToken() function to ensure the creation of a head token if no token is present.
I hope this helps you out Tomáš!
Steven's solution works like a charm, but only for Sagas. For those who want to achieve the same effect in a classic @EventHandler (to skip executions on replay), there is a way. First you have to find out how your tracking event processor is named - I found it in AxonDashboard (port 8024 on a running AxonServer) - usually it is the location of the component with the @EventHandler annotation (the package name, to be precise). Then add the configuration as Steven indicated in his answer.
@Autowired
public void customConfig(EventProcessingConfigurer configurer) {
    // This prevents replaying some events in @EventHandler methods
    var trackingProcessorConfig = TrackingEventProcessorConfiguration
            .forSingleThreadedProcessing()
            .andInitialTrackingToken(StreamableMessageSource::createHeadToken);
    configurer.registerTrackingEventProcessor("com.domain.notreplayable",
            org.axonframework.config.Configuration::eventStore,
            c -> trackingProcessorConfig);
}

Java reflection to set static final field fails after previous reflection

In Java, it turns out that field accessors get cached, and using accessors has side-effects. For example:
import java.lang.reflect.Field;
import java.lang.reflect.Modifier;

class A {
    private static final int FOO = 5;
}

Field f = A.class.getDeclaredField("FOO");
f.setAccessible(true);
f.getInt(null); // succeeds, and caches a FieldAccessor that still enforces FINAL

Field mf = Field.class.getDeclaredField("modifiers");
mf.setAccessible(true);

f = A.class.getDeclaredField("FOO");
f.setAccessible(true);
mf.setInt(f, f.getModifiers() & ~Modifier.FINAL);
f.setInt(null, 6); // fails
whereas
class A {
    private static final int FOO = 5;
}

Field mf = Field.class.getDeclaredField("modifiers");
mf.setAccessible(true);

Field f = A.class.getDeclaredField("FOO");
f.setAccessible(true);
mf.setInt(f, f.getModifiers() & ~Modifier.FINAL);
f.setInt(null, 6); // succeeds
Here's the relevant bit of the stack trace for the failure:
java.lang.IllegalAccessException: Can not set static final int field A.FOO to (int)6
at sun.reflect.UnsafeFieldAccessorImpl.throwFinalFieldIllegalAccessException(UnsafeFieldAccessorImpl.java:76)
at sun.reflect.UnsafeFieldAccessorImpl.throwFinalFieldIllegalAccessException(UnsafeFieldAccessorImpl.java:100)
at sun.reflect.UnsafeQualifiedStaticIntegerFieldAccessorImpl.setInt(UnsafeQualifiedStaticIntegerFieldAccessorImpl.java:129)
at java.lang.reflect.Field.setInt(Field.java:949)
These two reflective accesses are of course happening in very different parts of my code base, and I don't really want to change the first to fix the second. Is there any way to change the second reflective access to ensure it succeeds in both cases?
I tried looking at the Field object, and it doesn't have any methods that seem like they would help. In the debugger, I noticed overrideFieldAccessor is set on the second Field returned in the first example and doesn't see the changes to the modifiers. I'm not sure what to do about it, though.
If it makes a difference, I'm using openjdk-8.
If you want the modifier hack (don't forget it is a total hack) to work, you need to change the modifiers private field before the first time you access the field.
So, before you do f.getInt(null);, you need to do:
mf.setInt(f, f.getModifiers() & ~Modifier.FINAL);
The reason is that only one internal FieldAccessor object is created for each field of a class (*), no matter how many different actual java.lang.reflect.Field objects you have. And the check for the final modifier is done once, when the FieldAccessor implementation is constructed in the UnsafeFieldAccessorFactory.
Once it is determined that you can't set static final fields (because the setAccessible override works for non-static final fields, but not for static final fields), every subsequent reflective write keeps failing, even through a different Field object, because it keeps using the same FieldAccessor.
(*) barring synchronization issues; as the source code for Field mentions in a comment:
// NOTE that there is no synchronization used here. It is correct
// (though not efficient) to generate more than one FieldAccessor
// for a given Field.
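To see the shared-accessor behavior directly, here is a small self-contained demo (a sketch; it relies on Java 8 internals, so it won't work on newer JDKs):

import java.lang.reflect.Field;
import java.lang.reflect.Modifier;

public class SharedAccessorDemo {

    static class A {
        private static final int FOO = 5;
    }

    public static void main(String[] args) throws Exception {
        Field mf = Field.class.getDeclaredField("modifiers");
        mf.setAccessible(true);

        Field f = A.class.getDeclaredField("FOO");
        f.setAccessible(true);
        // Strip FINAL before the first access, so the single cached
        // FieldAccessor is created without the final-field check.
        mf.setInt(f, f.getModifiers() & ~Modifier.FINAL);
        f.setInt(null, 6); // succeeds

        // A second, distinct Field object reuses the same accessor.
        Field g = A.class.getDeclaredField("FOO");
        g.setAccessible(true);
        g.setInt(null, 7); // also succeeds: the FieldAccessor is shared
    }
}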