I'm starting from a functioning SignalR web application with an ActivityHub class derived from a SignalR Hub to manage client connections and activities. Similar to the stock ticker tutorial, there is also a singleton ActivityTimer class that uses a System.Threading.Timer to periodically broadcast to all clients via the hub context it gets in its constructor, like this:
activityHubContext = GlobalHost.ConnectionManager.GetHubContext<ActivityHub>();
Now I want to turn my ActivityHub into a base class with sub-classes for different kinds of activities, overriding a few methods in ActivityHub for activity-specific behaviors, and using activity-specific clients which each reference the appropriate activity sub-class (e.g., var activityHub = $.connection.coreActivityHub).
The sub-classing works for the hub server code and clients, and the ActivityTimer fires timer events as intended, but the ActivityTimer calls no longer reach the clients. If I get the hub context for a specific activity sub-class, it works again, but only for that sub-class:
activityHubContext = GlobalHost.ConnectionManager.GetHubContext<CoreActivityHub>();
Is there a way to have a single, generic ActivityTimer that will work with all sub-classes of ActivityHub? Can the ActivityTimer call some method in the base ActivityHub class rather than trying to reach all the clients directly (the base class seems to have no problems calling Clients.All.doSomething())?
In case it simplifies things (or makes possible an otherwise challenging solution), the application will only be running one type of activity at a time -- all clients will be on the same activity at one time.
In working on a different issue in the same project, I came across this, which points to this, where I also found this (all worth a quick read if the topic interests you). They provide one way to do what I was trying to do: have a method in the base class that can be called from "outside" to reach clients of any/all sub-classes. (They also helped me to think more clearly about the hub context, and why I believe my original ActivityTimer cannot work with sub-classes -- see note at the end of this answer for further explanation).
The solution to my problem is to create a method in the base class that does the call to the clients, and then call this new method from the ActivityTimer to reach the clients indirectly. This approach does not rely on having a hub context within ActivityTimer, and it frees us from worrying about sub-classes because the timer calls into the base class explicitly:
Create a static field in the base class to hold the hub context (protected rather than private so that each sub-class's constructor can set it):
protected static IHubContext thisHubContext;
Set this hub context in each sub-class's constructor with that class as the type passed to GetHubContext():
thisHubContext =
    GlobalHost.ConnectionManager.GetHubContext<CoreActivityHub>();
Create a static method in the base class that calls the desired client-side method(s); note that you could use other options than Clients.All to reach a subset of clients (for example, the arg might designate a SignalR group to reach):
public static void DoSomething(string someArg)
{
    thisHubContext.Clients.All.doSomething(someArg);
}
Call this base-class method from any server code that is "outside" the hub. In my case, I call it from the timer event handler in ActivityTimer:
ActivityHub.DoSomething("foo");
The messages will get through to the clients specified in the static method.
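Putting the pieces together, here is a minimal sketch of the whole arrangement, using the class and method names from the snippets above (everything else is illustrative rather than the exact original code):
using Microsoft.AspNet.SignalR;
using Microsoft.AspNet.SignalR.Hubs;

public class ActivityHub : Hub
{
    // Shared by all sub-classes; holds whichever sub-class context was set last.
    protected static IHubContext thisHubContext;

    // Called by "outside" server code such as the ActivityTimer.
    public static void DoSomething(string someArg)
    {
        thisHubContext.Clients.All.doSomething(someArg);
    }
}

public class CoreActivityHub : ActivityHub
{
    public CoreActivityHub()
    {
        // Each sub-class stores its own context in the shared base-class field.
        thisHubContext = GlobalHost.ConnectionManager.GetHubContext<CoreActivityHub>();
    }
}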
NB: this solution only works for the particular case mentioned at the end of the original post, in which only one sub-class is ever in use at a time, because each sub-class sets the base class static hub context to its own context. I have not yet tried to find a way around this limitation.
Note: I don't think it's possible to have "outside-the-hub" server code work with sub-classes by way of a stored hub context. In my original, functioning app (before I tried to create sub-classes of the ActivityHub), the ActivityTimer talks to the clients by means of a hub context that it gets on instantiation:
public ActivityTimer()
{
    // Get a hub context once, at construction, for one specific hub class...
    activityHubContext = GlobalHost.ConnectionManager.GetHubContext<ActivityHub>();
    activityTimer = new Timer(DoSomething, null, TimerInterval, TimerInterval);
}

public void DoSomething(object state)
{
    // ...and broadcast through that stored context on every timer tick.
    activityHubContext.Clients.All.doSomething("foo");
}
Because the hub context is obtained by explicit reference to a particular class (in this case, ActivityHub), it will not work with a sub-class. If instead (as I mentioned trying in my original post) I get the hub context for a particular sub-class, the timer will now work for instances of that sub-class, but not other sub-classes; again, the problem is that the hub context is obtained for a particular sub-class.
I don't think there's any way around this, so I think the only solution is the one outlined above in which the base class method uses the hub context set by the sub-class constructor, and the outside code calls the base class method to get to the clients by way of that sub-class context.
However, I'm still on the SignalR learning curve (among others) so will appreciate any corrections or clarifications!
Apologies in advance for the basic question, and please ignore the mix of Kotlin/Java!
I’ve spun up a very simple example building upon the example-cordapp and I wish to demonstrate the ability to override flows to put in some additional node operator specific logic from another cordapp. For example: a certain node owner may not want to do business under certain scenarios, so add in some additional checks prior to the initiating or responder signing phase.
I've successfully overridden the responder flow, and I can see it being executed on the node with the extending CorDapp; however, I'm not having much luck with the initiator flow.
From reading here: https://www.corda.net/blog/extending-and-overriding-flows-from-external-cordapps/ it suggests that I would have to have my API/RPC client invoke the extended version directly; however, I was hoping it would work similarly to the responder flow and be picked up automatically based on the hops.
Base flow:
public class BondFlow {
    @InitiatingFlow
    @StartableByRPC
    public static class Initiator extends FlowLogic<SignedTransaction> {
        // Stuff
        public Initiator(int bondValue, Party obligee, Party principal) {
            this.bondValue = bondValue;
            this.obligee = obligee;
            this.principal = principal;
        }
        // More Stuff
Overridden (in a separate Cordapp):
public class MyCustomFlow {
    @StartableByRPC
    public static class Initiator extends BondFlow.Initiator {
        // Stuff
        public Initiator(int bondValue, Party obligee, Party principal) {
            super(bondValue, obligee, principal);
        }
        // More stuff
My RPC client just calls the Initiator as you may expect:
val signedTx = proxy.startTrackedFlow(::Initiator, bondValue, obligeeParty, principalParty).returnValue.getOrThrow()
I could change my API/RPC client call to make the initiator flow that gets invoked configurable, but I'd like to understand whether there is an alternative.
Many Thanks
I would honestly suggest that you give your flows a different name for each distinct flow. We had named a couple of our flows simply Initiator and Responder purely for simplicity.
However, it is still fine to have the same names across different CorDapps. You just need to call them by their full name, including the package name.
Run flow list in your node shell and you should see it.
For example:
our signature YoFlow in the yo-Cordapp has the full name net.corda.examples.yo.flows.YoFlow,
but in a single-CorDapp scenario you can just call it by running flow start YoFlow
For ASP.NET Web API, I've been working on my own implementation of IHttpControllerActivator and am left wondering when (or why?) to use the HttpRequestMessage extension method "RegisterForDispose".
I see examples like this, and I can see the relevance in it, since IHttpController doesn't inherit IDisposable, and an implementation of IHttpController doesn't guarantee its own dispose logic.
public IHttpController Create(HttpRequestMessage request, HttpControllerDescriptor controllerDescriptor, Type controllerType)
{
    var controller = (IHttpController) _kernel.Get(controllerType);
    request.RegisterForDispose(new Release(() => _kernel.Release(controller)));
    return controller;
}
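The Release type used above isn't shown in that example; presumably it's just a small IDisposable adapter around an Action, something like this sketch (the real type in the example may differ):
using System;

// Minimal sketch: runs the supplied action when the request disposes it.
public sealed class Release : IDisposable
{
    private readonly Action _release;

    public Release(Action release)
    {
        _release = release;
    }

    public void Dispose()
    {
        _release();
    }
}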
But then I see something like this and begin to wonder:
public IHttpController Create(
    HttpRequestMessage request,
    HttpControllerDescriptor controllerDescriptor,
    Type controllerType)
{
    if (controllerType == typeof(RootController))
    {
        var disposableQuery = new DisposableStatusQuery();
        request.RegisterForDispose(disposableQuery);
        return new RootController(disposableQuery);
    }

    return null;
}
In this instance RootController isn't registered for disposal here, presumably because it's an ApiController or MVC controller, and thus will dispose of itself?
The instance of DisposableStatusQuery is registered for disposal since it's a disposable object, but why couldn't the controller dispose of the instance itself? RootController has knowledge of disposableQuery (or rather, its interface or abstract base), so it would know it's disposable.
When would I actually need to use HttpRequestMessage.RegisterForDispose?
One scenario I've found it useful for: a custom ActionFilter.
Because the attribute is cached/re-used, items within the attribute shouldn't rely on the controller to dispose of them (to my understanding, and probably with caveats)... so in order to create a custom attribute which isn't tied to a particular controller type/implementation, you can use this technique to clean up your stuff. In my case, it's for an ambient DbContextScope attribute.
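For illustration, a rough sketch of that kind of attribute (AmbientScope here is a made-up stand-in for something like a DbContextScope):
using System;
using System.Net.Http;
using System.Web.Http.Controllers;
using System.Web.Http.Filters;

// Stand-in for an ambient, disposable resource (e.g. a DbContextScope).
public sealed class AmbientScope : IDisposable
{
    public void Dispose() { /* release the ambient resource here */ }
}

public class AmbientScopeAttribute : ActionFilterAttribute
{
    public override void OnActionExecuting(HttpActionContext actionContext)
    {
        var scope = new AmbientScope();

        // The attribute instance is cached and reused across requests, so tie the
        // scope's lifetime to the request rather than to the attribute or controller.
        actionContext.Request.RegisterForDispose(scope);
        actionContext.Request.Properties["AmbientScope"] = scope;
    }
}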
RegisterForDispose is a hook that will be called when the request is disposed. It is often used along with some of the dependency injection containers.
For instance, some containers (like Castle Windsor) will by default track all dependencies that they resolve. This is governed by Windsor's release policy, LifecycledComponentsReleasePolicy, which keeps track of all components that were created. In other words, the garbage collector cannot clean up a component while the container still tracks it, which results in memory leaks.
So, for example, when you define your own IHttpControllerActivator for use with a dependency injection container, you do so in order to resolve the concrete controller and all its dependencies. At the end of the request you need to release everything the container created, otherwise you will end up with a big memory leak. RegisterForDispose gives you the opportunity to do exactly that.
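A sketch of that idea with Windsor, reusing the same kind of Release adapter shown earlier (class names here are illustrative):
using System;
using System.Net.Http;
using System.Web.Http.Controllers;
using System.Web.Http.Dispatcher;
using Castle.Windsor;

public class WindsorControllerActivator : IHttpControllerActivator
{
    private readonly IWindsorContainer _container;

    public WindsorControllerActivator(IWindsorContainer container)
    {
        _container = container;
    }

    public IHttpController Create(HttpRequestMessage request,
                                  HttpControllerDescriptor controllerDescriptor,
                                  Type controllerType)
    {
        var controller = (IHttpController)_container.Resolve(controllerType);

        // Ask Windsor to release the controller (and its dependency graph) when
        // the request is disposed, so the container stops tracking it.
        request.RegisterForDispose(new Release(() => _container.Release(controller)));

        return controller;
    }
}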
I use RegisterForDispose with DI containers. Based on a blog post, I implemented disposing of the container (a nested container) after each request so that it clears all the objects it created.
One may want to hook code around the life cycle of a request that (1) has little to do with controllers and (2) does not subclass the request type.
I would imagine the idiomatic form of such code takes the shape of extension methods on HttpRequestMessage, for example. If the code allocates disposable resources, it would need to hook the disposal code to something. I'm not too familiar with the various extension points of the ASP.NET pipeline, but I suppose hooking code just to dispose of resources at the end of the request processing stage was common enough to justify a dedicated registration mechanism for disposable resources (as opposed to more generally subscribing code to be executed).
Since you're asking, I found a nice example scenario in this sample. Here, an Entity Framework context is set as a property of the request, and must be disposed of properly. While this property is intended to be used by controllers, it isn't specific to any controller or controller super-class, so in my opinion this is a very sensible design choice. If you're curious why, this is because these requests are "OData batch requests" and controller actions will be invoked multiple times over the lifetime of each request (once per "operation"). Certain operations are grouped into atomic "changesets" that must be wrapped in transactions at a higher level than controllers (a dedicated mechanism is used: an ODataBatchHandler, so that the controllers themselves are oblivious to this). Hence, controllers alone are not enough, as one cannot have them dispose of the context themselves in this scenario.
Hope this helps.
I've seen a lot about the UnitOfWork and Repository patterns on the web but still don't have a clear understanding of why and when to use them -- it's somewhat confusing to me.
Considering I can make my repositories testable by using DI through an IoC container, as suggested in the post What are best practices for managing DataContext, I'm considering passing in a context as a dependency on my repository constructor and then disposing of it like so:
public interface ICustomObjectContext : IDisposable {}

public interface IRepository<T> {} // Not sure if I need to reference IDisposable here

public interface IMyRepository : IRepository<MyRepository> {}

public class MyRepository : IMyRepository
{
    private readonly ICustomObjectContext _customObjectContext;

    public MyRepository(ICustomObjectContext customObjectContext)
    {
        _customObjectContext = customObjectContext;
    }

    public void Dispose()
    {
        if (_customObjectContext != null)
        {
            _customObjectContext.Dispose();
        }
    }

    ...
}
My current understanding of using UnitOfWork with the Repository pattern is that it is for performing an operation across multiple repositories -- this behavior seems to contradict what @Ladislav Mrnka recommends for web applications:
For web applications use single context per request. For web services use single context per call. In WinForms or WPF application use single context per form or per presenter. There can be some special requirements which will not allow to use this approach but in most situation this is enough.
See the full answer here
If I understand him correctly, the DataContext should be short-lived and used on a per-request or per-presenter basis (I've seen this in other posts as well). In that case it would be appropriate for the repo to perform operations against the context, since its scope is limited to the component using it -- right?
My repos are registered in the IoC as transient, so I should get a new one with each request. If that's correct, then I should be getting a new context (with the code above) with each request as well and then disposing of it -- that said... why would I use the UnitOfWork pattern with the Repository pattern if I'm following the convention above?
As far as I understand, the Unit of Work pattern doesn't necessarily cover multiple contexts. It just encapsulates a single operation or -- well -- unit of work, similar to a transaction.
Creating your context basically starts a Unit of Work; calling DbContext.SaveChanges() finishes it.
I'd even go so far as to say that in its current implementation Entity Framework's DbContext / ObjectContext resembles both the repository pattern and the unit of work pattern.
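To make that concrete, a bare DbContext used on its own already behaves like both (MyDbContext, Customer, and Order are placeholder types for illustration):
// DbSet<T> plays the repository role; SaveChanges() completes the unit of work.
using (var context = new MyDbContext())
{
    context.Customers.Add(new Customer { Name = "Alice" });
    context.Orders.Add(new Order { Total = 42m });

    // Both inserts are committed together, like a transaction.
    context.SaveChanges();
}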
I would use a simplified UoW if I wanted to push the context's SaveChanges away from the repositories when they share the same context instance across one web request.
I imagine you have something like a Save() method on your repositories that looks similar to _customObjectContext.SaveChanges(). Now let's assume you have two methods containing business logic that use the repos to persist changes to the DB. For the sake of simplicity we'll call them MethodA and MethodB, both of them containing a fair amount of logic for performing some activities. MethodA is used separately in the system, but it is also called by MethodB for some reason. What happens is that MethodA saves changes on some repository, and since we are still in the same request, the changes made in MethodB before it called MethodA will also be saved, regardless of whether we want that or not. So in such a case we unintentionally break the transaction inside MethodB and make the code harder to understand.
I hope I described this clearly enough -- it wasn't easy. Anyway, other than that I cannot see why a UoW would be helpful in your scenario. As Dennis Traub pointed out quite correctly, ObjectContext and DbContext are in fact implementations of a UoW, so you'd probably be reinventing the wheel by implementing it on your own.
The ObjectContext/DbContext is an implementation of the UnitOfWork pattern. It encapsulates several operations and makes sure they are submitted in one transaction to the database.
The only thing you are doing is wrapping it in your own class to make sure you're not depending on a specific implementation in the rest of your code.
In your case, the problem lies in the fact that your Context shouldn't be disposed of by your Repository. The Repository is not the one that instantiates the Context, so it shouldn't dispose of it either. The UnitOfWork that encapsulates multiple repositories is responsible for creating and disposing the Context and you will call a Save method on your UnitOfWork.
Code can look like this:
using (IUnitOfWork unitOfWork = new UnitOfWork())
{
    PersonRepository personRepository = new PersonRepository(unitOfWork);
    var person = personRepository.FindById(personId);

    ProductRepository productRepository = new ProductRepository(unitOfWork);
    var product = productRepository.FindById(productId);

    person.CreateOrder(orderId, product);

    // Commit all changes through the unit of work.
    unitOfWork.Save();
}
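For reference, one possible shape for that unit of work over an EF context is sketched below; MyDbContext is a placeholder for your actual context type, and the repositories constructed with the unit of work would read and write through its Context:
public interface IUnitOfWork : IDisposable
{
    void Save();
}

public class UnitOfWork : IUnitOfWork
{
    private readonly MyDbContext _context = new MyDbContext(); // placeholder context type

    // Repositories that receive this unit of work use the same context instance.
    public MyDbContext Context
    {
        get { return _context; }
    }

    public void Save()
    {
        // Commits everything tracked by the shared context in one go.
        _context.SaveChanges();
    }

    public void Dispose()
    {
        _context.Dispose();
    }
}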
I have a class called CommunicationManager which is responsible for communication with server.
It includes the methods login() and onLoginResponse(). When the user logs in, login() has to be called, and when the server responds, onLoginResponse() is executed.
What I want to do is bind actions to the user interface. In the GUI class I created an instance of CommunicationManager called mCommunicationManager. From the GUI class the login() method is simply called by the line
mCommunicationManager.login();
What I don't know how to do is bind a method from the GUI class to onLoginResponse() -- for example, if the GUI class includes a method notifyUser() which displays the message received from the server.
I would really appreciate it if anyone could show how to bind methods so that a method from the GUI class (e.g. GUI.notifyUser()) is executed when mCommunicationManager receives the message from the server and CommunicationManager.onLoginResponse() is executed.
Thanks!
There are two patterns here I can see you using. One is the publish/subscribe or observer pattern mentioned by Pete. I think this is probably what you want, but seeing as the question mentions binding a method for later execution, I thought I should mention the Command pattern.
The Command pattern is basically a work-around for the fact that Java does not treat methods (functions) as first-class objects, so it's impossible to pass them around. Instead, you create an interface that can be passed around and that encapsulates the necessary information about how to call the original method.
So for your example:
interface Command {
    public void execute();
}
and you then pass in an instance of this command when you execute the login() function (untested, I always forget how to get anonymous classes right):
final GUI target = this;
Command command = new Command() {
    @Override
    public void execute() {
        target.notifyUser();
    }
};
mCommunicationManager.login(command);
And in CommunicationManager, login() saves the reference to the command and onLoginResponse() executes it when the server responds:
public void login(Command command) {
    this.command = command; // save the callback for later
}

public void onLoginResponse() {
    command.execute(); // invokes GUI.notifyUser()
}
edit:
I should probably mention that, while this is the general explanation of how it works, in Java there is already some plumbing for this purpose, namely the ActionListener and related classes (actionPerformed() is basically the execute() in Command). These are mostly intended to be used with the AWT and/or Swing classes though, and thus have features specific to that use case.
The idiom used in Java to achieve callback behaviour is listeners. Construct an interface with methods for the events you want, have a mechanism for registering listener objects with the source of the events, and when an event occurs, call the corresponding method on each registered listener. This is a common pattern for AWT and Swing events; for a randomly chosen example see FocusListener and the corresponding FocusEvent object.
Note that all the events in Java AWT and Swing inherit ultimately from EventObject, and the convention is to call the listener SomethingListener and the event SomethingEvent. Although you can get away with naming your code whatever you like, it's easier to maintain code which sticks with the conventions of the platform.
As far as I know Java does not support method binding or delegates like C# does.
You may have to implement this via interfaces (e.g. a Command or listener interface, as described above).
Maybe this website will be helpful:
http://www.javaworld.com/javaworld/javatips/jw-javatip10.html
You can look at the swt-snippets (look at the listeners)
http://www.eclipse.org/swt/snippets/
Or you can use Runnable, overriding the run() method with your 'callback' code when you create an instance.
I've got a number of modules in a Prism application which load data that takes 3-8 seconds to get from a service.
I would like to be able to say in my bootstrapper something like this:
PSEUDO-CODE:
Customers allCustomers = Preloader(Models.GetAllCustomers);
This would run on a background thread, and when the user actually needs the variable "allCustomers", it would be fully loaded.
Is there an automatic service in Prism/Unity which does this type of preloading?
No, there is not.
However...
What you can consider is registering your ViewModel in the container with a ContainerControlledLifetimeManager in your ConfigureContainer method, so the views can use it. You'd kick off your threaded request in the constructor of your ViewModel and allow views to pull this ViewModel out of the container.
Even if they grab the ViewModel out of the container before the GetAllCustomers call has finished, they will be notified correctly, provided the ViewModel implements INotifyPropertyChanged and raises it for the property you store the customers in.
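A rough sketch of that approach is below, registered in ConfigureContainer with something like Container.RegisterType<CustomersViewModel>(new ContainerControlledLifetimeManager()). CustomersViewModel, ICustomerService, and Customer are illustrative names, not Prism types or your actual model:
using System.Collections.Generic;
using System.ComponentModel;
using System.Threading.Tasks;

public class Customer { /* ... */ }

public interface ICustomerService
{
    IEnumerable<Customer> GetAllCustomers(); // the slow 3-8 second call
}

public class CustomersViewModel : INotifyPropertyChanged
{
    private IEnumerable<Customer> _allCustomers;

    public CustomersViewModel(ICustomerService customerService)
    {
        // Kick off the slow service call on a background thread right away;
        // views can resolve this ViewModel from the container immediately.
        Task.Factory.StartNew(() => AllCustomers = customerService.GetAllCustomers());
    }

    public IEnumerable<Customer> AllCustomers
    {
        get { return _allCustomers; }
        private set
        {
            _allCustomers = value;
            OnPropertyChanged("AllCustomers"); // bindings update when the data arrives
        }
    }

    public event PropertyChangedEventHandler PropertyChanged;

    private void OnPropertyChanged(string name)
    {
        var handler = PropertyChanged;
        if (handler != null) handler(this, new PropertyChangedEventArgs(name));
    }
}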
If it was more appropriate, you could also do this from the Modules (in the Initialize method), rather than in the bootstrapper (for instance, if your Module was what actually knew about your Customer's Model).