Axon Sagas duplicate events in the event store when replaying events to a new DB

We have an Axon application that stores new Orders. For each order state change (OrderStateChangedEvent) it plans a couple of tasks. The tasks are triggered and processed by yet another Saga (TaskSaga, out of scope of this question).
When I delete the projection database but leave the event store, then run the application again, the events are replayed (which is correct), but the tasks are duplicated.
I suppose this is because the OrderStateChangedEvent triggers a new set of ScheduleTaskCommands each time.
Since I'm new to Axon, I can't figure out how to avoid this duplication.
The event store runs on Axon Server.
The Spring Boot application auto-configures the Axon components.
The projection database contains the projection tables and the Axon tables:
token_entry
saga_entry
association_value_entry
I suppose all the events are replayed because, by recreating the database, the Axon tables are gone (hence there is no record of the last applied event).
Am I missing something?
Should the token_entry/saga_entry/association_value_entry tables be part of the DB that holds the projection tables on each application node?
I thought that the event store could be replayed onto a new application node's DB at any time without changing the event history, so I can run as many nodes as I wish. Or that I can remove the projection DB at any time and run the application, which causes the events to be projected into the fresh DB again. Or is this not true?
In general, my problem is that one event produces a command leading to new (duplicated) events being produced. Should I avoid this "chaining" of events to avoid duplication?
THANKS!
Axon configuration:
@Configuration
public class AxonConfig {

    @Bean
    public EventSourcingRepository<ApplicationAggregate> applicationEventSourcingRepository(EventStore eventStore) {
        return EventSourcingRepository.builder(ApplicationAggregate.class)
                .eventStore(eventStore)
                .build();
    }

    @Bean
    public SagaStore sagaStore(EntityManager entityManager) {
        return JpaSagaStore.builder()
                .entityManagerProvider(new SimpleEntityManagerProvider(entityManager))
                .build();
    }
}
The CreateOrderCommand is received by the Order aggregate (the fromCommand method just maps the command 1:1 to an event):
@CommandHandler
public OrderAggregate(CreateOrderCommand cmd) {
    apply(OrderCreatedEvent.fromCommand(cmd))
            .andThenApply(() -> OrderStateChangedEvent.builder()
                    .applicationId(cmd.getOrderId())
                    .newState(OrderState.NEW)
                    .build());
}
The Order aggregate sets its properties:
@EventSourcingHandler
protected void on(OrderCreatedEvent event) {
    id = event.getOrderId();
    // ... additional properties set
}

@EventSourcingHandler
protected void on(OrderStateChangedEvent event) {
    this.state = event.getNewState();
}
The OrderStateChangedEvent is handled by a Saga that schedules a couple of tasks for an order in the particular state:
private Map<String, TaskStatus> tasks = new HashMap<>();
private OrderState orderState;

@StartSaga
@SagaEventHandler(associationProperty = "orderId")
public void on(OrderStateChangedEvent event) {
    orderState = event.getNewState();
    List<OrderStateAwareTaskDefinition> tasksByState = taskService.getTasksByState(orderState);
    if (tasksByState.isEmpty()) {
        finishSaga(event.getOrderId());
    }
    tasksByState.stream()
            .map(task -> ScheduleTaskCommand.builder()
                    .orderId(event.getOrderId())
                    .taskId(IdentifierFactory.getInstance().generateIdentifier())
                    .targetState(orderState)
                    .taskName(task.getTaskName())
                    .build())
            .peek(command -> tasks.put(command.getTaskId(), SCHEDULED))
            .forEach(command -> commandGateway.send(command));
}

I think I can help you in this situation.
So, this happens because the TrackingToken used by the TrackingEventProcessor which supplies all the events to your Saga instances is initialized to the beginning of the event stream. Due to this, the TrackingEventProcessor will start from the beginning of time, thus dispatching all your commands a second time.
There are a couple of things you could do to resolve this:
Option 1: instead of wiping the entire database, only wipe the projection tables and leave the token table intact.
Option 2: configure the initialTrackingToken of a TrackingEventProcessor to start at the head of the event stream instead of the tail.
Option 1 would work out fine, but requires some delegation from the operations perspective. Option 2 leaves it in the hands of a developer, potentially a little safer than the other solution.
To adjust the token to start at the head, you can instantiate a TrackingEventProcessor with a TrackingEventProcessorConfiguration:
EventProcessingConfigurer configurer;

TrackingEventProcessorConfiguration trackingProcessorConfig =
        TrackingEventProcessorConfiguration.forSingleThreadedProcessing()
                .andInitialTrackingToken(StreamableMessageSource::createHeadToken);

configurer.registerTrackingEventProcessor("{class-name-of-saga}Processor",
        Configuration::eventStore,
        c -> trackingProcessorConfig);
You'd thus create the desired configuration for your Saga and call the andInitialTrackingToken() function, ensuring the creation of a head token if no token is present.
I hope this helps you out Tomáš!

Steven's solution works like a charm, but only for Sagas. For those who want to achieve the same effect in a classic @EventHandler (to skip executions on replay) there is a way. First you have to find out how your tracking event processor is named - I found it in the AxonDashboard (port 8024 on a running AxonServer) - usually it is the location of the component with the @EventHandler annotation (the package name, to be precise). Then add the configuration as Steven indicated in his answer.
@Autowired
public void customConfig(EventProcessingConfigurer configurer) {
    // This prevents replaying some events in @EventHandler
    var trackingProcessorConfig = TrackingEventProcessorConfiguration
            .forSingleThreadedProcessing()
            .andInitialTrackingToken(StreamableMessageSource::createHeadToken);
    configurer.registerTrackingEventProcessor("com.domain.notreplayable",
            org.axonframework.config.Configuration::eventStore,
            c -> trackingProcessorConfig);
}

Related

Add Quartz Job&Trigger to running Razor Pages application

I have a Razor Pages app that implements Quartz.NET to store jobs in a MySQL database. The implementation works fine so far: I can connect to the db, store jobs, and they are executed at the specified times. The issue I'm having currently is that I need to schedule and execute jobs based on user inputs (without restarting the app), and that I can't get to work. I'm very new to Quartz & ASP.NET and I haven't been coding for very long either, so apologies if I've made any stupid mistakes.
I've read somewhere that I shouldn't initialize multiple schedulers, so I've tried storing the scheduler object I've got so I can access and use it later. However, when I try to access it from another class later, I get a NullReferenceException. To be honest, this feels like it shouldn't even work, so I'm not surprised it doesn't... can anyone please look at my code below and tell me if this can work? Or is there a better way to do this?
I've found one other solution where they basically create a job on startup that periodically checks a db for new jobs and adds them to the scheduler. I guess that would work; it seems a bit clunky, though. Plus it's from 10 years ago, so maybe there's a better way today? How to add job with trigger for running Quartz.NET scheduler instance without restarting server?
One other idea I've had was to open (and close) a new app whenever I need to create a job. I'm not sure I like that idea, but it seems less resource intensive than the recurring job described above. Would that be a viable option?
The code for my current solution:
Scheduler:
//Creating Scheduler
Scheduler = await schedulerFactory.GetScheduler();
Scheduler.JobFactory = jobFactory;

var key = new JobKey("Notify Job", "DEFAULT");
if (key == null)
{
    //Create Job
    IJobDetail jobDetail = CreateJob(jobMetaData);
    //Create Trigger
    ITrigger trigger = CreateTrigger(jobMetaData);
    //Schedule Job
    //await Scheduler.ScheduleJob(jobDetail, trigger, cancellationToken);
    await Scheduler.AddJob(jobDetail, true);
}
//Start Scheduler
await Scheduler.Start(cancellationToken);

//Copying the scheduler object into a different class where it's easier to access.
ScheduleStore scheduleStore = new ScheduleStore();
scheduleStore.tempScheduler = Scheduler;
ScheduleStore:
public class ScheduleStore
{
    public IScheduler tempScheduler { get; set; }

    public ScheduleStore()
    {
    }
}
runtime Scheduler:
public class RunningScheduler : IHostedService
{
    public IScheduler scheduler { get; set; }
    private readonly JobMetadata jobMetaData;

    public RunningScheduler(JobMetadata job)
    {
        ScheduleStore scheduleStore = new ScheduleStore();
        this.scheduler = scheduleStore.tempScheduler;
        this.jobMetaData = job;
    }

    public async Task StartAsync(CancellationToken cancellationToken)
    {
        IJobDetail jdets = CreateJob(jobMetaData);
        if (jobMetaData.CronExpression == "--")
        {
            ITrigger jtriggz = CreateSimpleTrigger(jobMetaData);
            //the next line throws the exception.
            await scheduler.ScheduleJob(jdets, jtriggz, cancellationToken);
            //It's definitely the scheduler that's throwing the null reference exception.
        }
        // the else does basically the same as the if, only with a cron trigger instead of a simple one, so I've omitted it.
I see that you are using a hosted service. Have you noticed that Quartz has that support already built-in?
Quartz cannot handle new jobs ("new code that runs") dynamically, but new triggers for sure. You just need to obtain a reference to IScheduler, and then you can add new triggers pointing to an existing job, or just call scheduler.TriggerJob, which will run your job once with the given parameters (the job data map is a powerful feature for passing execution parameters).
I'd advise checking the GitHub repository and its examples; there are specific ones for different features and for ASP.NET Core and worker integrations.
Generally, Quartz already has database persistence support which you can use. Just call scheduler methods to add jobs and triggers - they will be persisted and available between application restarts (and take effect immediately without the need for a restart).
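A minimal sketch of that idea, assuming Quartz.NET 3.x registered through the hosted-service integration so that ISchedulerFactory is injectable; the page model name and data-map key are illustrative, while the "Notify Job" key is reused from the question:

public class CreateJobModel : PageModel
{
    private readonly ISchedulerFactory _schedulerFactory;

    public CreateJobModel(ISchedulerFactory schedulerFactory)
    {
        // Injected by the Quartz hosted-service integration; no new scheduler is created here.
        _schedulerFactory = schedulerFactory;
    }

    public async Task OnPostAsync(string cron)
    {
        IScheduler scheduler = await _schedulerFactory.GetScheduler();
        var jobKey = new JobKey("Notify Job", "DEFAULT");

        // Attach a new trigger to the already registered (durable) job...
        ITrigger trigger = TriggerBuilder.Create()
            .ForJob(jobKey)
            .WithCronSchedule(cron)
            .Build();
        await scheduler.ScheduleJob(trigger);

        // ...or fire that job once right away, passing parameters through the job data map.
        var data = new JobDataMap();
        data["requestedBy"] = "user-input";
        await scheduler.TriggerJob(jobKey, data);
    }
}

Since the triggers end up in the MySQL job store, they survive restarts and take effect immediately on the running scheduler.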

Ultra low latency processes using Axon Framework

So, I'm working on a PoC for a low-latency trading engine using Axon and the Spring Boot framework. Is it possible to achieve latency as low as 10 - 50ms for a single process flow? The process will include validations, orders, and risk management. I have done some initial tests on a simple app to update the order state and execute it, and I'm clocking in at 300ms+ latency. Which got me curious as to how much I can optimize with Axon.
Edit:
The latency issue isn't related to Axon. Managed to get it down to ~5ms per process flow using an InMemoryEventStorageEngine and DisruptorCommandBus.
The flow of messages goes like this: NewOrderCommand (published from client) -> OrderCreated (published from aggregate) -> ExecuteOrder (published from saga) -> OrderExecutionRequested -> ConfirmOrderExecution (published from saga) -> OrderExecuted (published from aggregate)
Edit 2:
Finally switched over to Axon Server but as expected the average latency went up to ~150ms. Axon Server was installed using Docker. How do I optimize the application using AxonServer to achieve sub-millisecond latencies moving forward? Any pointers are appreciated.
Edit 3:
@Steven, based on your suggestions I have managed to bring down the latency to an average of 10ms, which is a good start! However, is it possible to bring it down even further? What I am testing now is just a small process out of a series of processes to be done, like validations, risk management and position tracking, before finally executing the order. All of this should be done within 5ms or less; the worst case to tolerate is 10ms (these are the updated time budgets). Also, do note in the configs below that the new readings are based on an InMemorySagaStore backed by a WeakReferenceCache. Really appreciate the help!
OrderAggregate:
@Aggregate
internal class OrderAggregate {

    @AggregateIdentifier(routingKey = "orderId")
    private lateinit var clientOrderId: String
    private var orderId: String = UUID.randomUUID().toString()
    private lateinit var state: OrderState
    private lateinit var createdAtSource: LocalTime
    private val log by Logger()

    constructor() {}

    @CommandHandler
    constructor(command: NewOrderCommand) {
        log.info("received new order command")
        val (orderId, created) = command
        apply(
            OrderCreatedEvent(
                clientOrderId = orderId,
                created = created
            )
        )
    }

    @CommandHandler
    fun handle(command: ConfirmOrderExecutionCommand) {
        apply(OrderExecutedEvent(orderId = command.orderId, accountId = accountId))
    }

    @CommandHandler
    fun execute(command: ExecuteOrderCommand) {
        log.info("execute order event received")
        apply(
            OrderExecutionRequestedEvent(
                clientOrderId = clientOrderId
            )
        )
    }

    @EventSourcingHandler
    fun on(event: OrderCreatedEvent) {
        log.info("order created event received")
        clientOrderId = event.clientOrderId
        createdAtSource = event.created
        setState(Confirmed)
    }

    @EventSourcingHandler
    fun on(event: OrderExecutedEvent) {
        val now = LocalTime.now()
        log.info(
            "elapse to execute: ${
                createdAtSource.until(
                    now,
                    MILLIS
                )
            }ms. created at source: $createdAtSource, now: $now"
        )
        setState(Executed)
    }

    private fun setState(state: OrderState) {
        this.state = state
    }
}
OrderManagerSaga:
#Profile("rabbit-executor")
#Saga(sagaStore = "sagaStore")
class OrderManagerSaga {
#Autowired
private lateinit var commandGateway: CommandGateway
#Autowired
private lateinit var executor: RabbitMarketOrderExecutor
private val log by Logger()
#StartSaga
#SagaEventHandler(associationProperty = "clientOrderId")
fun on(event: OrderCreatedEvent) {
log.info("saga received order created event")
commandGateway.send<Any>(ExecuteOrderCommand(orderId = event.clientOrderId, accountId = event.accountId))
}
#SagaEventHandler(associationProperty = "clientOrderId")
fun on(event: OrderExecutionRequestedEvent) {
log.info("saga received order execution requested event")
try {
//execute order
commandGateway.send<Any>(ConfirmOrderExecutionCommand(orderId = event.clientOrderId))
} catch (e: Exception) {
log.error("failed to send order: $e")
commandGateway.send<Any>(
RejectOrderCommand(
orderId = event.clientOrderId
)
)
}
}
}
Beans:
@Bean
fun eventSerializer(mapper: ObjectMapper): JacksonSerializer {
    return JacksonSerializer.Builder()
        .objectMapper(mapper)
        .build()
}

@Bean
fun commandBusCache(): Cache {
    return WeakReferenceCache()
}

@Bean
fun sagaCache(): Cache {
    return WeakReferenceCache()
}

@Bean
fun associationsCache(): Cache {
    return WeakReferenceCache()
}

@Bean
fun sagaStore(sagaCache: Cache, associationsCache: Cache): CachingSagaStore<Any> {
    val sagaStore = InMemorySagaStore()
    return CachingSagaStore.Builder<Any>()
        .delegateSagaStore(sagaStore)
        .associationsCache(associationsCache)
        .sagaCache(sagaCache)
        .build()
}

@Bean
fun commandBus(
    commandBusCache: Cache,
    orderAggregateFactory: SpringPrototypeAggregateFactory<Order>,
    eventStore: EventStore,
    txManager: TransactionManager,
    axonConfiguration: AxonConfiguration,
    snapshotter: SpringAggregateSnapshotter
): DisruptorCommandBus {
    val commandBus = DisruptorCommandBus.builder()
        .waitStrategy(BusySpinWaitStrategy())
        .executor(Executors.newFixedThreadPool(8))
        .publisherThreadCount(1)
        .invokerThreadCount(1)
        .transactionManager(txManager)
        .cache(commandBusCache)
        .messageMonitor(axonConfiguration.messageMonitor(DisruptorCommandBus::class.java, "commandBus"))
        .build()
    commandBus.registerHandlerInterceptor(CorrelationDataInterceptor(axonConfiguration.correlationDataProviders()))
    return commandBus
}
Application.yml:
axon:
  server:
    enabled: true
  eventhandling:
    processors:
      name:
        mode: tracking
        source: eventBus
  serializer:
    general: jackson
    events: jackson
    messages: jackson
Original Response
Your setup's description is thorough, but I think there are still some options I can recommend. This touches a bunch of locations within the Framework, so if anything's unclear on the suggestions given their position or goals within Axon, feel free to add a comment so that I can update my response.
Now, let's provide a list of the things I have in mind:
Set up snapshotting for aggregates if loading takes too long. This is configurable with the AggregateLoadTimeSnapshotTriggerDefinition.
Introduce a cache for your aggregate. I'd start with trying out the WeakReferenceCache. If this doesn't suffice, it would be worth investigating the EhCache and JCache adapters. Or, construct your own. Here's the section on Aggregate caching, by the way.
Introduce a cache for your saga. I'd start with trying out the WeakReferenceCache. If this doesn't suffice, it would be worth investigating the EhCache and JCache adapters. Or, construct your own. Here's the section on Saga caching, by the way.
Do you really need a Saga in this setup? The process seems simple enough it could run within a regular Event Handling Component. If that's the case, not moving through the Saga flow will likely introduce a speed up too.
Have you tried optimizing the DisruptorCommandBus? Try playing with the WaitStrategy, publisher thread count, invoker thread count and the Executor used.
Try out the PooledStreamingEventProcessor (PSEP, for short) instead of the TrackingEventProcessor (TEP, for short). The former provides more configuration options, and its defaults already provide a higher throughput compared to the TEP. Increasing the "batch size" allows you to ingest bigger amounts of events in one go. You can also change the Executor the PSEP uses for event retrieval work (done by the coordinator) and event processing (the worker executor is in charge of this). A configuration sketch covering this and the snapshotting/caching suggestions follows after this list.
There are also some things you can configure on Axon Server that might increase throughput. Try out the event.events-per-segment-prefetch, the event.read-buffer-size or command-thread. There might be other options that work, so it might be worth checking out the entire list of options here.
Although it's hard to deduce whether this will generate an immediate benefit, you could give the Axon Server runnable more memory / CPU. At least 2Gb heap and 4 cores. Playing with these numbers might just help too.
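As a rough illustration of the snapshotting, aggregate-caching and PSEP suggestions, here is a minimal Java configuration sketch against Axon 4.x. The bean names, the 500 ms threshold, the batch size and the processor name are assumptions for the example, not values taken from the question:

@Configuration
public class TuningConfig {

    // Snapshot the aggregate whenever loading it from the event store takes longer than 500 ms.
    @Bean
    public SnapshotTriggerDefinition orderSnapshotTrigger(Snapshotter snapshotter) {
        return new AggregateLoadTimeSnapshotTriggerDefinition(snapshotter, 500);
    }

    // Cache the aggregate so it is not re-sourced from events on every command.
    @Bean
    public Cache orderCache() {
        return new WeakReferenceCache();
    }

    // Use a PooledStreamingEventProcessor with a bigger batch size for the saga's processor.
    @Autowired
    public void configureProcessors(EventProcessingConfigurer configurer) {
        configurer.registerPooledStreamingEventProcessor(
                "OrderManagerSagaProcessor",
                org.axonframework.config.Configuration::eventStore,
                (config, builder) -> builder.batchSize(100)
        );
    }
}

The snapshot trigger and cache beans would then be attached to the aggregate, for instance through the @Aggregate annotation's snapshotTriggerDefinition and cache attributes, assuming an Axon version that supports them.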
There's likely more to share, but these are the things I have on top of mind. Hope this helps you out somewhat David!
Second Response
To further deduce where we can achieve more performance, I think it would be essential to know which parts of the process your application works on take the longest. That will allow us to deduce what should be improved, if we can improve it.
Have you tried making a thread dump to deduce which parts take up the most time? If you can share that as an update to your question, we can start thinking about the following steps.

autoStartup for @StreamListener

Unlike @KafkaListener, it looks like @StreamListener does not support the autoStartup parameter. Is there a way to achieve this same behavior for @StreamListener? Here's my use case:
I have a generic Spring application that can listen to any Kafka topic and write to its corresponding table in my database. For some topics, the volume is low and thus processing a single message with very low latency is fine. For other topics that are high volume, the code should receive a microbatch of messages and write to the database using Jdbc batch on a less frequent basis. Ideally the definition for the listeners would look something like this:
// low volume listener
@StreamListener(target = Sink.INPUT, autoStartup = "${application.singleMessageListenerEnabled}")
public void handleSingleMessage(@Payload GenericRecord message) ...

// high volume listener
@StreamListener(target = Sink.INPUT, autoStartup = "${application.multipleMessageListenerEnabled}")
public void handleMultipleMessages(@Payload List<GenericRecord> messageList) ...
For a low-volume topic, I would set application.singleMessageListenerEnabled to true and application.multipleMessageListenerEnabled to false, and vice versa for a high-volume topic. Thus, only one of the listeners would be actively listening for messages and the other not actively listening.
Is there a way to achieve this with #StreamListener?
First, please consider upgrading to the functional programming model, which would take you minutes to refactor. We've all but deprecated the annotation-based programming model.
If you do then what you're trying to accomplish is very easy:
@SpringBootApplication
public class SimpleStreamApplication {

    public static void main(String[] args) throws Exception {
        SpringApplication.run(SimpleStreamApplication.class);
    }

    @Bean
    public Consumer<GenericRecord> singleRecordConsumer() {...}

    @Bean
    public Consumer<List<GenericRecord>> multipleRecordConsumer() {...}
}
Then you can simply use the --spring.cloud.function.definition=singleRecordConsumer property for the single-message case and --spring.cloud.function.definition=multipleRecordConsumer for the batch case when starting the application, thus choosing which specific listener you want to activate.
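If you prefer to keep that switch in configuration rather than on the command line, the same property can live in application.yml; a small sketch, where the binding name follows the <functionName>-in-0 convention and the destination is an illustrative topic name:

spring:
  cloud:
    function:
      definition: multipleRecordConsumer   # or singleRecordConsumer
    stream:
      bindings:
        multipleRecordConsumer-in-0:
          destination: my-topic            # illustrative topic name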

Using FirebaseRemoteConfig, I am confused whether the setDefaults method overwrites the cached config values from the last fetch every time we run it.

I am using a singleton instance of the FirebaseRemoteConfig class which is generated using the following provider method.
@Provides
@Singleton
FirebaseRemoteConfig provideFirebaseRemoteConfig() {
    final FirebaseRemoteConfig mFirebaseRemoteConfig = FirebaseRemoteConfig.getInstance();
    FirebaseRemoteConfigSettings configSettings = new FirebaseRemoteConfigSettings.Builder()
            .setDeveloperModeEnabled(BuildConfig.DEBUG)
            .build();
    mFirebaseRemoteConfig.setConfigSettings(configSettings);
    mFirebaseRemoteConfig.setDefaults(R.xml.remote_config_defaults);

    long cacheExpiration = 3600 * 3; // 3 hours in seconds.
    if (mFirebaseRemoteConfig.getInfo().getConfigSettings().isDeveloperModeEnabled()) {
        cacheExpiration = 0;
    }

    mFirebaseRemoteConfig.fetch(cacheExpiration)
            .addOnCompleteListener(new OnCompleteListener<Void>() {
                @Override
                public void onComplete(@NonNull Task<Void> task) {
                    if (task.isSuccessful()) {
                        // Once the config is successfully fetched it must be activated before newly fetched
                        // values are returned.
                        mFirebaseRemoteConfig.activateFetched();
                    } else {
                        FirebaseCrash.log("RemoteConfig fetch failed at " + System.currentTimeMillis());
                    }
                }
            });
    return mFirebaseRemoteConfig;
}
Now the issue here is: since I am calling setDefaults every time I generate the singleton instance, and since the last fetched config values have an expiration time, doesn't that mean the config values will revert to the hardcoded default values instead of picking up the last fetched config? That is, in the case where the app is unable to fetch from the server after the last fetched config values expire.
I tried looking at the docs, but there was no specific detail on how the whole caching works except for a simple overview. People who have experience using RemoteConfig can probably answer this easily, but I am using it for the first time, so any help is appreciated.
Nope. setDefaults does not overwrite any previously fetched values you might have received from RemoteConfig.
From RemoteConfig's perspective, the "expiration time" doesn't mean that the previously fetched values are considered invalid. It just means that it's time for it to go out onto the network and see if any new values have appeared. If they haven't (or if it can't reach the network), RemoteConfig will keep whatever values it previously downloaded last time.
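A small sketch of what that means for value lookup; it mirrors the fetch/activate flow from the question, and the "welcome_message" key is purely illustrative:

// getString() returns the last *activated* fetched value for a key if one exists,
// and only falls back to the value from setDefaults() when no fetched value is present.
mFirebaseRemoteConfig.fetch(cacheExpiration)
        .addOnCompleteListener(task -> {
            if (task.isSuccessful()) {
                mFirebaseRemoteConfig.activateFetched(); // promote the newly fetched values
            }
            // On failure (or while the cached values are still "fresh") the previously
            // activated values remain in place; they are NOT replaced by the defaults.
            String welcome = mFirebaseRemoteConfig.getString("welcome_message");
        });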

Solution for asynchronous notification upon future completion in GridGain needed

We are evaluating GridGain 6.5.5 at the moment as a potential solution for distribution of compute jobs over a grid.
The problem we are facing at the moment is the lack of a suitable asynchronous notification mechanism that will notify the sender asynchronously upon job completion (or future completion).
The prototype architecture is relatively simple and the core issue is presented in the pseudo code below (the full code cannot be published due to an NDA). *** Important: the code represents only the "problem"; the possible solution in question is described in the text at the bottom, together with the question.
//will be used as an entry point to the grid for each client that will submit jobs to the grid
public class GridClient {

    //client node for submission that will be reused
    private static Grid gNode = GridGain.start("config xml file goes here");

    //provides the functionality of submitting multiple jobs to the grid for calculation
    public int sendJobs2Grid(GridJob[] jobs) {
        Collection<GridCallable<GridJobOutput>> calls = new ArrayList<>();
        for (final GridJob job : jobs) {
            calls.add(new GridCallable<GridJobOutput>() {
                @Override public GridJobOutput call() throws Exception {
                    GridJobOutput result = job.process();
                    return result;
                }
            });
        }

        GridFuture<Collection<GridJobOutput>> fut = this.gNode.compute().call(calls);
        fut.listenAsync(new GridInClosure<GridFuture<Collection<GridJobOutput>>>() {
            @Override public void apply(GridFuture<Collection<GridJobOutput>> jobsOutputCollection) {
                Collection<GridJobOutput> jobsOutput;
                try {
                    jobsOutput = jobsOutputCollection.get();
                    for (GridJobOutput currResult : jobsOutput) {
                        //do something with the current job output BUT CANNOT call jobFinished(GridJobOutput out) method
                        //of sendJobs2Grid class here
                    }
                } catch (GridException e) {
                    // TODO Auto-generated catch block
                    e.printStackTrace();
                }
            }
        });
        return calls.size();
    }

    //This function should be invoked asynchronously when the GridFuture is completed.
    //It will invoke some processing/aggregation of the result for each submitted job.
    public void jobFinished(GridJobOutput out) {}
}

//represents a job type that is to be submitted to the grid
public class GridJob {
    public GridJobOutput process() {}
}
Description:
The idea is that a GridClient instance will be used to submit a list/array of jobs to the grid, notify the sender how many jobs were submitted, and, when the jobs are finished (asynchronously), perform some processing of the results. For the results-processing part, the GridClient.jobFinished(GridJobOutput out) method should be invoked.
Now, getting to the question at hand, we are aware of the GridInClosure interface that can be used with GridFuture.listenAsync(GridInClosure lsnr) in order to register a future listener.
The problem (if my understanding is correct) is that this is a good and pretty straightforward solution in case the result of the future is to be "processed" by code that is within the scope of the given GridInClosure. In our case we need to use GridClient.jobFinished(GridJobOutput out), which is out of that scope.
Due to the fact that GridInClosure has a single type argument R, which has to be of the same type as the GridFuture result, it seems impossible to use this approach in a straightforward manner.
If I have got it right so far, then in order to use the GridFuture.listenAsync(..) approach the following has to be done:
GridClient will have to implement an interface granting access to the jobFinished(..) method; let's name it GridJobFinishedListener.
GridJob will have to be "wrapped" in a new class in order to have an additional property of the GridJobFinishedListener type.
GridJobOutput will have to be "wrapped" in a new class in order to have an additional property of the GridJobFinishedListener type.
When a GridJob is done, in addition to the "standard" result, the GridJobOutput will contain the corresponding GridJobFinishedListener reference.
Given the above modifications, GridInClosure can now be used, and in the apply(GridJobOutput) method it will be possible to call the GridClient.jobFinished(GridJobOutput out) method through the GridJobFinishedListener interface.
So if I have got it all right so far, this seems a bit of a clumsy workaround, so I hope I have missed something and there is a much better way to handle this relatively simple case of an asynchronous callback.
Looking forward to any helpful feedback; thanks a lot in advance.
Your code looks correct and I don't see any problems in calling the jobFinished method from the future listener closure. You declared it as an anonymous class, which always has a reference to the enclosing class instance (GridClient in your case); therefore you have access to all variables and methods of the GridClient instance.
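A tiny, generic sketch of that point (the class and method names are illustrative, not GridGain API): an anonymous class declared inside an instance method holds an implicit reference to the enclosing instance, so it can call the outer class's methods directly.

public class ClientSketch {

    public void submit() {
        // The anonymous Runnable captures ClientSketch.this implicitly.
        Runnable listener = new Runnable() {
            @Override
            public void run() {
                // Same as calling ClientSketch.this.jobFinished("done")
                jobFinished("done");
            }
        };
        listener.run();
    }

    public void jobFinished(String out) {
        System.out.println("job finished: " + out);
    }
}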
