I need to continuously check memory for a change so I can send a notification, and I'm using System.Threading.Timer to achieve it. I want the notification ASAP, so I need to run the callback method quite often, but I don't want the CPU usage to hit 100% doing this.
Can anybody tell me how I should set the interval of this timer? (I think it would be good to set it as low as possible.)
Thanks
OK, so there is a very basic strategy for how you can be immediately notified of a modification to the dictionary without burning unnecessary CPU cycles, and it involves using Monitor.Wait and Monitor.Pulse/Monitor.PulseAll.
On a very basic level, you have something like this:
public Dictionary<long, CometMessage> Messages = new Dictionary<long, CometMessage>();

public void ModifyDictionary(long key, CometMessage value)
{
    lock (Messages) // Wait/Pulse require holding the lock on the object
    {
        Messages[key] = value;
        Monitor.PulseAll(Messages); // wake every thread blocked in Monitor.Wait
    }
}

public void CheckChanges()
{
    while (true)
    {
        lock (Messages)
        {
            Monitor.Wait(Messages); // releases the lock and blocks until a pulse
            // The dictionary has changed!
            // TODO: Do some work!
        }
    }
}
Now, this is very rudimentary and you could get all sorts of synchronization issues (read/write), so you should look into Marc Gravell's implementation of a blocking queue and apply the same logic to your dictionary (essentially making a blocking dictionary).
Furthermore, the above example will only let you know when the dictionary was modified; it will not tell you WHICH element was modified. It's probably better to take the basics from above and design your system so you know which element was last modified, perhaps by storing the last modified key and checking the value associated with it.
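As a rough sketch of that last-modified-key idea (in Java here, where Object.wait()/notifyAll() are the direct analogue of Monitor.Wait/Monitor.PulseAll; all names are mine, not from the question):

import java.util.HashMap;
import java.util.Map;

// Sketch only: a "watchable" map where writers record which key changed
// and wake all waiters, so readers block at zero CPU instead of polling.
public class WatchableMap<K, V> {

    private final Map<K, V> map = new HashMap<>();
    private K lastKey;    // key of the most recent modification
    private long version; // bumped on every write; guards against spurious wakeups

    public synchronized void put(K key, V value) {
        map.put(key, value);
        lastKey = key;
        version++;
        notifyAll(); // the analogue of Monitor.PulseAll
    }

    // Blocks until put() is called, then reports which key changed.
    public synchronized K awaitChange() throws InterruptedException {
        long seen = version;
        while (version == seen) {
            wait(); // the analogue of Monitor.Wait
        }
        return lastKey;
    }
}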
Related
I am starting out with the Axon framework and hit a bit of a roadblock.
While I can load individual aggregates using their ID, I can’t figure out how to get a list of all aggregates, or a list of all aggregate IDs.
The EventSourcingRepository class only has load() methods that return one aggregate.
Is there a way to list all aggregates (or all aggregate IDs), or am I supposed to keep a list of all aggregate IDs outside of Axon?
To keep things simple I am only using an InMemoryEventStorageEngine for now.
I am using Axon 3.0.7.
First off, I was wondering why you would want to retrieve a complete list of all the aggregates from the Repository.
The Repository interface is set such that you can load an Aggregate to handle commands or to create a new Aggregate.
Asking the question you have, I'd almost guess you're using it for querying purposes rather than command handling.
This however isn't the intended use for the EventSourcingRepository.
One reason I can think of that you'd want this is to implement an API call that publishes a command to all Aggregates of a specific type in your application.
In that scenario then yes, you need to store the aggregateId references yourself.
But concluding with my earlier question: why do you want to retrieve a list of aggregates through the Repository interface?
Answer Update
Regarding your comment, I've added the following to my answer:
Axon helps you to set up your application with event sourcing in mind, but also with CQRS (Command Query Responsibility Segregation).
That thus means that the command and query side of your application are pulled apart.
The Aggregate Repository is the command side of your application, where you request to perform actions.
It thus does not provide a list of aggregates, as a command is an expression of intent on an aggregate. Hence it only requires the Repository user to retrieve one aggregate or create one.
Your example, where you need the list of Aggregates, belongs to the query side of your application.
The query side (your views/entities) is typically updated based on events (sourced through events).
For any query requirement you have in your application, you'd typically introduce a separate view tailored to your needs.
In your example, that means you'd introduce an Event Handling Component, listening to your Aggregate's events, which updates a Repository with query models of your aggregate.
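As a minimal sketch of what that could look like (the DocumentCreatedEvent type and its field are assumptions for illustration, not taken from your application):

import java.util.ArrayList;
import java.util.List;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

import org.axonframework.eventhandling.EventHandler;

// Hypothetical event that the aggregate would publish on creation.
class DocumentCreatedEvent {
    private final String documentId;

    DocumentCreatedEvent(String documentId) {
        this.documentId = documentId;
    }

    String getDocumentId() {
        return documentId;
    }
}

// Query-side projection: maintains the list of all aggregate IDs purely
// from events, so the command-side Repository never needs to be queried.
public class DocumentIdProjection {

    private final Set<String> documentIds = ConcurrentHashMap.newKeySet();

    @EventHandler
    public void on(DocumentCreatedEvent event) {
        documentIds.add(event.getDocumentId());
    }

    public List<String> allDocumentIds() {
        return new ArrayList<>(documentIds);
    }
}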
The EventStore passed into EventSourcingRepository implements StreamableMessageSource<M extends Message<?>> which is a means of reaching in for the aggregates.
Whilst doing it the framework way with an event handling component will probably scale better (depending on how it's used / the context), I'm pretty sure the event handling components are driven by StreamableMessageSource<M extends Message<?>> anyway. So if we wanted to skip the framework and just reach in, we could do it like this:
List<String> aggregates(StreamableMessageSource<Message<?>> eventStore) {
    return immediatelyAvailableStream(eventStore.openStream(
            eventStore.createTailToken() /* All events in the event store */
    ))
            .filter(e -> e instanceof DomainEventMessage)
            .map(e -> (DomainEventMessage<?>) e)
            .map(DomainEventMessage::getAggregateIdentifier)
            .distinct()
            .collect(Collectors.toList());
}
/*
Note that the stream returned by BlockingStream.asStream() will block / won't terminate
as it waits for future elements.
*/
static <M> Stream<M> immediatelyAvailableStream(final BlockingStream<M> messageStream) {
    Iterator<M> iterator = new Iterator<M>() {
        @Override
        public boolean hasNext() {
            return messageStream.hasNextAvailable();
        }

        @Override
        public M next() {
            try {
                return messageStream.nextAvailable();
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                throw new IllegalStateException("Didn't expect to be interrupted");
            }
        }
    };
    Spliterator<M> spliterator = Spliterators.spliteratorUnknownSize(iterator, Spliterator.ORDERED);
    Stream<M> stream = StreamSupport.stream(spliterator, false);
    return stream.onClose(messageStream::close);
}
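For completeness, a quick usage sketch (assuming Axon 3's EmbeddedEventStore wrapping the InMemoryEventStorageEngine you mentioned; adjust to however your event store is actually wired up):

import java.util.List;

import org.axonframework.eventsourcing.eventstore.EmbeddedEventStore;
import org.axonframework.eventsourcing.eventstore.EventStore;
import org.axonframework.eventsourcing.eventstore.inmemory.InMemoryEventStorageEngine;

EventStore eventStore = new EmbeddedEventStore(new InMemoryEventStorageEngine());
// ... handle some commands so aggregates publish events ...
List<String> allAggregateIds = aggregates(eventStore);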
I'm trying to implement simple DDD/CQRS architecture without event-sourcing for now.
Currently I need to write some code for adding a notification to a document entity (document can have multiple notifications).
I've already created a command NotificationAddCommand, ICommandService and IRepository.
Before inserting the new notification through IRepository I have to query the current user_id from the db using the NotificationAddCommand.User_name property.
I'm not sure how to do it right, because I could:
1) Use IQuery from the read flow.
2) Pass user_name to the domain entity and resolve user_id in the repository.
Code:
public class DocumentsCommandService : ICommandService<NotificationAddCommand>
{
    private readonly IRepository<Notification, long> _notificationsRepository;

    public DocumentsCommandService(
        IRepository<Notification, long> notifsRepo)
    {
        _notificationsRepository = notifsRepo;
    }

    public void Handle(NotificationAddCommand command)
    {
        // command.user_id = Resolve(command.user_name) ??
        // command.source_secret_id = Resolve(command.source_id, command.source_type) ??
        foreach (var receiverId in command.Receivers)
        {
            var notificationEntity = _notificationsRepository.Get(0);
            notificationEntity.TargetId = receiverId;
            notificationEntity.Body = command.Text;
            _notificationsRepository.Add(notificationEntity);
        }
    }
}
What if I need more complex logic before inserting? Is it ok to use IQuery or should I create additional services?
The idea of reusing your IQuery somewhat defeats the purpose of CQRS: your read side is supposed to be optimized for pulling data for display/query purposes, meaning it can be denormalized, distributed, etc. in any way you deem necessary, without being restricted by - or having implications for - the command side. A key example is that the read side might not be immediately consistent, while your command side obviously needs to be for integrity/validity purposes.
With that in mind, you should look to implement a contract for your write side that will resolve the necessary information for you. Driving from the consumer, that might look like this:
public DocumentsCommandService(IRepository<Notification, long> notifsRepo,
                               IUserIdResolver userIdResolver)

public interface IUserIdResolver
{
    string ByName(string username);
}
With IUserIdResolver implemented as appropriate.
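For illustration only, here is one possible shape of such a resolver, sketched in Java (the UserDao type and its method are hypothetical stand-ins for your real data access code):

// Illustrative mirror of the C# contract above; names are hypothetical.
interface UserIdResolver {
    String byName(String username);
}

class DbUserIdResolver implements UserIdResolver {

    private final UserDao userDao; // hypothetical data access object

    DbUserIdResolver(UserDao userDao) {
        this.userDao = userDao;
    }

    @Override
    public String byName(String username) {
        // Resolve the id on the write side, without reaching into IQuery.
        return userDao.findIdByName(username);
    }
}

interface UserDao {
    String findIdByName(String username);
}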
Of course, if both this and the query side use the same low-level data access implementation (e.g. an immediately-consistent repository), that's fine. What's important is that your read and write sides are sufficiently separated that, if you ever need to change where your read side gets its data (for example, to facilitate a slow offline process), you can swap out where you're reading from without having to untangle the reads from the writes.
Ultimately the most important thing is to know why you are making the architectural decisions you're making in your scenario - then you will find it much easier to make these sorts of decisions one way or another.
In a project I'm working on I have similar issues. I see three options to solve this problem:
1) What I did was make a UserCommandRepository that has a query option, and inject that repository into your service. Since the few queries I did need were so simple (just returning single values), it seemed like a fine tradeoff in my case.
2) Another way of handling it is to force the caller to raise the command with the user_id already resolved. Then you let them do the querying.
3) A third option is to ask yourself why you need a user_id. If it's to make some relations when querying the data, you could also handle this when querying the data (or when propagating your write DB to your read DB).
With MR_contextForCurrentThread not being safe for operations (and being deprecated), I'm trying to ensure I understand the best pattern for a series of reads/writes in concurrent operations.
It's been advised to use saveWithBlock for storing new records, and presumably deletion, which provides a context for use. The Count and fetch methods can be given a context, but still use MR_contextForCurrentThread by default.
Is the safest pattern to obtain a context using [NSManagedObjectContext MR_context] at the start of the operation and use it for all actions? (The operation depends on some async work, but is not long running.) And then perform MR_saveToPersistentStoreWithCompletion when the operation is finished?
What's the reason for using an NSOperation? There are two options here:
Use MagicalRecord's background saving blocks:
[MagicalRecord saveWithBlock:^(NSManagedObjectContext *localContext) {
    // Do your task for the background thread here
}];
The other option is (as you've already tried) to bundle it up into an NSOperation. Yes, I would cache an instance of a private queue context using [NSManagedObjectContext MR_newContext] (sorry, I deprecated the MR_context method this afternoon in favour of a clearer alternative). Be aware that unless you manually merge changes from other contexts, the private queue context that you create will be a snapshot of the parent context at the point in time that you created it. Generally that's not a problem for short running background tasks.
Managed Object Contexts are really lightweight and cheap to create — whenever you're about to do work on any thread other than the main thread, just initialise and use a new context. It keeps things simple. Personally, I favour the + saveWithBlock: and associated methods — they're just simple.
Hope that helps!
You can't use saveWithBlock from multiple threads (concurrent NSOperations) if you want to:
rely on the create-by-primary-attribute feature of MagicalRecord
rely on the automatic establishment of relationships (which relies on the primary attribute)
manually fetch/MR_find objects and save based on the result
This is because a new local context is created every time you use saveWithBlock, so multiple contexts can exist at the same time without knowing about each other's changes. As Tony mentioned, localContext is a snapshot of rootContext and changes go in only one direction, from localContext to rootContext, but not vice versa.
Here is a thread-safe (or even consistency-safe, in MagicalRecord terms) method that serializes calls to saveWithBlock:
@implementation MagicalRecord (MyActions)

+ (void) my_saveWithBlock:(void(^)(NSManagedObjectContext *localContext))block completion:(MRSaveCompletionHandler)completion
{
    static dispatch_semaphore_t semaphore;
    static dispatch_once_t once;
    dispatch_once(&once, ^{
        semaphore = dispatch_semaphore_create(1);
    });
    dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
        // Only one saveWithBlock may run at a time; the semaphore serializes them.
        dispatch_semaphore_wait(semaphore, DISPATCH_TIME_FOREVER);
        [MagicalRecord saveWithBlock:block
                          completion:^(BOOL success, NSError *error) {
                              dispatch_semaphore_signal(semaphore);
                              if (completion) {
                                  completion(success, error);
                              }
                          }];
    });
}

@end
I'm using Robotlegs and Signals for my application. This is my first time using Robotlegs, and I'm following Joel Hooks' Signal Command Map example here.
I've noticed that it seems quite verbose in contrast to events. For every Signal I have to create a new class, whereas with events I would group event types into one class.
I like how visually and instantly descriptive this is... just browsing a signals package will reveal all app communications. Although it seems quite verbose to me.
Are other people using this? Is the way I'm using signals like this correct, or have people found a way around this verbosity?
Cheers
It's the correct way though. The major advantage of signals is you can include them in your interface definitions, but obviously you'll end up with a big pile of signals.
In general I use signals only for my view->mediator and service->command communication (1-to-1). For system wide notifications I use events (n-to-n). It makes the number of signals a bit more manageable.
But it's a matter of preference obviously.
Using a good IDE and/or a templating system alleviates a lot of the "pain" of having to create the various signals.
You do not have to make a new signal class for command maps; it's just good practice. You could just give the "dataType" class a type property and do a switch on that, but that would be messy for commands. Note that Commands are basically for triggering application-wide actions.
Not all signals trigger application-wide actions.
For instance, if you are responding to a heap of events from a single View, I suggest making a Signal class for related "view events" (e.g. MyButtonSignal for MyButtonView) and giving it a type property.
A typical signal of mine will look like:
package {
    import org.osflash.signals.Signal;

    public class MyButtonSignal extends Signal {
        public static const CLICK:String = 'myButtonClick';
        public static const OVER:String = 'myButtonOver';

        public function MyButtonSignal() {
            super(String, Object);
        }
    }
}
Dispatch like so
myButtonSignal.dispatch(MyButtonSignal.CLICK, {name:'exit'});
Listen as normal:
myButtonSignal.add(doMyButtonSignal);
Handle signal like so:
protected function doMyButtonSignal(type:String, params:Object):void {
    switch(type) {
        case MyButtonSignal.CLICK:
            trace('click', params.name);
            break;
        case MyButtonSignal.OVER:
            trace('OVER', params.name);
            break;
    }
}
Occasionally it's useful to give the data variable its own data class.
So every time you realise "Aw shit, I need to react to another event", you simply go to the Signal and add a new static const to represent the event. Much like you (probably?) did when using Event objects.
For every Signal I have to create a new class, whereas with events I
would group event types into one class.
Instead of doing that you could just use the signal as a property… something like:
public var myCustomSignal:Signal = new Signal(String,String);
You can think of a signal as a property of your object/interface.
In Joel's example he's using signals to denote system-level events and is mapping them with the Robotlegs SignalMap, which maps signals by type. Because they are mapped by type, you need to create a unique type for each system-level signal.
I am developing an application in GWT as my Bachelor's Thesis and I am fairly new to this. I have researched asynchronous callbacks on the internet. What I want to do is this: I want to handle the login of a user and display different data if they are an admin or a plain user.
My call looks like this:
serverCall.isAdmin(new AsyncCallback<Boolean>() {
    public void onFailure(Throwable caught) {
        // display error
    }

    public void onSuccess(Boolean admin) {
        if (!admin) {
            // do something
        } else {
            // do something else
        }
    }
});
Now, the code examples I have seen handle the data directly in the //do something// part. We discussed this with the person who is supervising me, and I had the idea that I could fire an event upon success and load the page accordingly when this event is fired. Is this a good idea? Or should I stick with loading everything in the inner function? What confuses me about async callbacks is the fact that I can only use final variables inside the onSuccess function, so I would rather not handle things in there - insight would be appreciated.
Thanks!
An anonymous inner class may outlive the method that created it, so it cannot reference the method's local variables directly; instead it captures a copy of each one. Java requires those variables to be final so that the copy and the original can never diverge. Class fields don't have this restriction because they are accessed through a reference to the enclosing instance.
It's just standard Java that you can only use final variables inside an inner class. Here is a great discussion of this topic.
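A minimal example of the capture rule (all names here are purely illustrative):

public class CaptureExample {

    interface Callback {
        void run();
    }

    static Callback makeCallback() {
        final int count = 42; // must be final (or effectively final since Java 8)
        return new Callback() {
            @Override
            public void run() {
                // The anonymous class captured a copy of 'count'; final
                // guarantees the copy and the original can never diverge.
                System.out.println("count = " + count);
            }
        };
    }

    public static void main(String[] args) {
        // Runs long after makeCallback() has returned and its locals are gone.
        makeCallback().run();
    }
}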
When I use the AsyncCallback I do exactly what you suggested: I fire an event through GWT's EventBus. This allows several different parts of my application to respond when a user logs in.
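As a sketch of that pattern (the LoginEvent class and its handler are illustrative names, not from the question; GwtEvent/EventHandler are GWT's standard custom-event machinery):

import com.google.gwt.event.shared.EventHandler;
import com.google.gwt.event.shared.GwtEvent;

// Hypothetical event carrying the result of the isAdmin() call.
public class LoginEvent extends GwtEvent<LoginEvent.Handler> {

    public interface Handler extends EventHandler {
        void onLogin(LoginEvent event);
    }

    public static final Type<Handler> TYPE = new Type<Handler>();

    private final boolean admin;

    public LoginEvent(boolean admin) {
        this.admin = admin;
    }

    public boolean isAdmin() {
        return admin;
    }

    @Override
    public Type<Handler> getAssociatedType() {
        return TYPE;
    }

    @Override
    protected void dispatch(Handler handler) {
        handler.onLogin(this);
    }
}

Inside onSuccess you then only call eventBus.fireEvent(new LoginEvent(admin)), and any presenter registered via eventBus.addHandler(LoginEvent.TYPE, ...) can decide which page to load, which also sidesteps the final-variable awkwardness.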