Multiple Hubs in SignalR (Design)

Using SignalR, I am wondering what the best way to set up my Hubs is under the following scenario: Say I have a web casino app (just for fun) and it has three games, Poker, Blackjack, and Slots. Poker and Blackjack are both multi-player so they have a chat feature and Slots does not. Okay, to support this I was thinking of setting up my Hubs in the following way.
BaseHub (Handles connection stuff that is common to Poker, Blackjack, and Slots)
PokerHub : BaseHub (Handles Poker game play)
BlackjackHub : BaseHub (Handles Blackjack game play)
SlotsHub : BaseHub (Handles Slots game play)
ChatHub (Handles chat features)
I was then thinking of having the Poker page of this web app connect to the PokerHub as well as the ChatHub, and the Blackjack page would do something similar. The Slots page would obviously only connect to the SlotsHub.
Now, the things I am unsure about are: should the Poker/Blackjack pages connect to both the PokerHub/BlackjackHub and the ChatHub, or is there some way I could have them connect only to the PokerHub/BlackjackHub and delegate the chat features to the ChatHub? In that case I might create an interface like IHasChat. In either case, should the ChatHub also extend the BaseHub? Currently the BaseHub only implements IConnected and IDisconnect, and also handles basic group functions (JoinGroup, LeaveGroup). Also, should the BaseHub be a shared instance (singleton)?
Lastly, if you think I am just going about it totally wrong please let me know. This is my first SignalR project and I know I am not an expert on it. Also, I know that there are actually several questions here. If you can answer any or all of them, either way I really appreciate it.
Thank You,
Tom

You can have as many hubs as you like, since there is only ever one connection to the SignalR server. Hubs are an RPC implementation, and they all share that single connection.
The wiki page on hubs for the JS client shows a connection like so:
$.connection.hub.start()
where hub is a namespace inside the JS client.
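To see why multiple hubs don't cost extra connections, here is a minimal plain-JavaScript sketch of the idea (no SignalR dependency; all names are illustrative, not real SignalR API): each hub proxy just tags its traffic with a hub name, and the single connection routes messages by that name.

```javascript
// Conceptual model of SignalR's hub multiplexing: one physical
// connection, many named hub proxies. Plain JS, no SignalR required.
function Connection() {
  this.handlers = {}; // "hubName.method" -> callback
}
Connection.prototype.createHubProxy = function (hubName) {
  var conn = this;
  return {
    on: function (method, cb) {
      conn.handlers[hubName + "." + method] = cb;
    },
    invoke: function (method, arg) {
      // In real SignalR this goes over the wire; here we loop back
      // locally just to show the routing-by-hub-name idea.
      var cb = conn.handlers[hubName + "." + method];
      if (cb) cb(arg);
    }
  };
};

var connection = new Connection(); // the one shared connection
var poker = connection.createHubProxy("pokerHub");
var chat = connection.createHubProxy("chatHub");

var log = [];
poker.on("dealt", function (card) { log.push("poker:" + card); });
chat.on("message", function (text) { log.push("chat:" + text); });

poker.invoke("dealt", "A\u2660");
chat.invoke("message", "hi");
```

Both proxies share `connection`; the hub name in the payload is what keeps poker traffic and chat traffic apart.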

In practice you rarely need multiple hub classes on the server. Having several is like needing multiple internet connections: one for games, another for social media, and a third for something else.
Create a single hub class on the server side.
What you should do instead is have different clients in your JS.
For example, my website had two features: live chat and offline notifications. For chat.aspx I had a variable called "chat" acting as the hub; for all other pages I had a variable called NotificationHub.
You can do something like the following:
var poker = $.connection.hub;
var blackJack = $.connection.hub;
var other = $.connection.hub;
This way you can call the respective methods.
Also, if you want to identify who called your server method, you can attach a query string parameter:
poker.qs = "type=poker";

This has been out here for a while, so you may not need the answer anymore, but... here goes nothing.
I'm new to SignalR, so I'm a little unsure of how your design would impact its performance. If that isn't an issue, I might consider an object model like this:
abstract BaseHub : Hub;
abstract MultiplayerHub : BaseHub; // (Handles chat and other MP necessities)
BlackJackHub : MultiplayerHub;
PokerHub : MultiplayerHub;
SlotsHub : BaseHub;
I can't think of any reasons why this design would cause any issues with SignalR, but again, I don't have a lot of experience to go on.

Related

How does one expose constants in a Java Google App Engine Endpoints API?

Simple question -- how do you expose constants in a Java Google App Engine Endpoints API?
e.g.
public static final int CODE_FOO = 3845;
I'd like the client of the Endpoints to be able to match on CODE_FOO rather than on 3845. I'll end up doing enum wrappers (which is probably better anyway), but I'm just starting to be curious whether this is even doable. Thx
Note that this isn't a full answer, but here is a workaround: in Android Studio, create a very lightweight "common" Java project and put anything you want to keep in sync there, such as constants, as well as common types you want exposed (e.g. an enum representing all possible return/error codes).
This way you should get pretty decent compile-time safety and keep these guys in sync.
Please feel free to comment if anyone has better suggestions.
This is unfortunately a Law of Information (ahem). If you have a message protocol you defined, both sides of the interaction need to be aware of the messages that could be passed. There's no other way for the client to be aware of what it needs to respond to. Ajax libraries hard-code the number "200" to be able to detect a successful request, as one example.
Yes, just use a switch statement on strings inside your client code. Or you could use a dictionary mapping strings to functions, and call the function you get after looking up the string you received.
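The dictionary-of-functions approach might look like this in a JS client (the codes and handler bodies here are hypothetical, just to show the dispatch shape):

```javascript
// Dispatch server response codes through a lookup table of handlers
// instead of a switch statement. Codes and handlers are illustrative.
var handlers = {
  "FOO": function () { return "handled foo"; },
  "BAR": function () { return "handled bar"; }
};

function dispatch(code) {
  var handler = handlers[code];
  if (!handler) {
    throw new Error("unknown code: " + code);
  }
  return handler();
}
```

Adding a new code is then a one-line change to the table rather than another switch branch.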

Passing ViewModel from Presentation to Service - Is it Okay?

In one of my views, I have a ViewModel which I populate from two tables, and then bind a List<ViewModel> to an editable GridView (ASP.NET Web Forms).
Now I need to send that edited List<ViewModel> back to the Services layer to update it in the database.
My question is - is it okay to send the ViewModel back to Services, or should it stay in the Presentation layer? If not, would I be better off using a DTO? Many thanks.
Nice question!
After several (hard) debates with my teammates, plus my experience with MVC applications, I would not recommend passing view models to your service/domain layer.
A view model belongs to the presentation layer, no matter what.
Because a view model can be a combination of different models (e.g. one view model built from ten models), your service layer should work only with your domain entities.
Otherwise, your service layer will end up unusable, because it is constrained by view models that are specific to one view.
Nice tools like https://github.com/AutoMapper/AutoMapper were made to do the mapping job.
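The mapping a tool like AutoMapper automates can be hand-rolled; here is a language-neutral sketch in plain JS (all property names are hypothetical) showing the key point: view-only fields never cross into the domain shape the service layer sees.

```javascript
// Hand-rolled view-model-to-domain mapping: the view model combines
// fields from several models plus view-only data; the service layer
// only ever receives the domain shape.
function toDomainCustomer(viewModel) {
  return {
    id: viewModel.customerId,
    name: viewModel.customerName
    // view-only fields (e.g. viewModel.displayColor) are
    // deliberately dropped here, not passed to the service
  };
}

var vm = { customerId: 7, customerName: "Ada", displayColor: "#fff" };
var entity = toDomainCustomer(vm);
```

The service method's contract stays stable even if the view model is later reshaped for the UI.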
I would not do it. My rule is: supply service methods with everything they need to do their job and nothing more.
Why?
Because it reduces coupling. More often than not service methods are addressed from several sources (consumers). It is much easier for a consumer to fulfil a simple method signature than having to build a relatively complex object like a view model that it otherwise may have nothing to do with. It may even need a reference to an assembly it wouldn't need otherwise.
It greatly reduces maintenance effort. I think an average developer spends more than 50% of their time inspecting and tracking existing code (maybe even much more). Now, everybody knows that looking for something that is not there takes disproportionately long: you must have been everywhere to be sure. If a method receives arguments (or an object with properties) that are not used directly or further down the call stack, you or others will walk this long road time and again.
So if there is anything in the view model that does not play a part in the service method, don't use it to call the method.
Yes, I am pretty sure it is okay.
Try using MS Entity Framework; it will help you a lot.

Extending/overriding System.Net.Mail.SmtpClient Send(message As MailMessage) method

Scenario
Around 20 ASP.NET (VB) applications share the same code framework and, when deployed, also share a common web.config. Throughout the various applications we use System.Net.Mail.SmtpClient/MailMessage to send e-mails, and now we would like to implement an e-mail opt-out feature for our users with a minimal amount of change to the existing code. That rules out the simplest approach: inheriting a class from SmtpClient, say OurSmtpClient, and overriding the Send() method to remove all users who have opted out of receiving e-mails, as that would mean changing every New SmtpClient() to New OurSmtpClient() throughout the apps.
Alternatives
We've previously used tagMapping to remap tags to our in-house, derived alternatives; is there anything similar for classes, so that every SmtpClient automatically becomes an OurSmtpClient and thus uses the overridden Send() method?
We've also looked at extension methods, but the problem there is that we can't override existing methods, only add new ones.
The next alternative we considered was reflection, but we couldn't get our minds around how to actually implement it.
Events .. Oh, if there was a Sending event ...
Code (cause everyone likes it)
Here is the inherit approach, just to understand what we are looking for:
Public Class OurSmtpClient
    Inherits SmtpClient

    Public Overloads Sub Send(message As MailMessage)
        For i As Integer = message.To.Count - 1 To 0 Step -1
            With message.To(i)
                If (.Address.Contains("test")) Then
                    message.To.RemoveAt(i)
                End If
            End With
        Next
        MyBase.Send(message)
    End Sub
End Class
Any suggestions? How can this be done without changing the code in the existing applications and only in the shared code (lives in App_Code in the apps) or the shared web.config?
I would change this at the data layer instead of in the mail client.
I'm assuming you store all the information about your users somewhere centrally, along with the fact that they would rather not receive any further e-mails. So in my eyes, the change would be to simply no longer return those users whenever you ask for the list of users to send e-mails to.
I don't know enough about the way your current applications work, but that does seem like the most convenient place to change it.
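As a sketch of that data-layer approach (plain JS rather than VB, and with illustrative field names): the one query that produces the recipient list excludes opted-out users, so none of the mail-sending call sites need to change.

```javascript
// Data-layer filtering: exclude opted-out users where the recipient
// list is built, leaving all SmtpClient call sites untouched.
// User records and the emailOptOut flag are illustrative.
function recipientsFor(users) {
  return users
    .filter(function (u) { return !u.emailOptOut; })
    .map(function (u) { return u.address; });
}

var users = [
  { address: "a@example.com", emailOptOut: false },
  { address: "b@example.com", emailOptOut: true }
];
var recipients = recipientsFor(users);
```

In the real system this filter would live in the central user query (e.g. a WHERE clause), so all 20 applications pick it up from the shared code.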
The fact that you are struggling to implement what should be a straightforward requirement is a big clue that you've built up too much technical debt. Your post conveys a strong reluctance to pay that debt down. Resist that reluctance and instead embrace merciless refactoring: bite the bullet and introduce the specialised SMTP class.

Static functions or Events in Flex?

I'm working with an application which was originally designed to make heavy use of static variables and functions to impose singleton-style access to objects. I've been utilizing Parsley to break apart some of the coupling in this project, and I'd like to start chiseling away at the use of static functions.
I will explain the basic architecture first, and then I'll ask my questions.
Within this application exists a static utility which sends/receives HTTP requests. When a component wishes to request data via HTTP, it invokes a static function in this class:
Utility.fetchUrl(url, parameters, successHandler, errorHandler);
This function calls another static function in a tracking component which monitors how long requests take, how many are sent, etc. The invocation looks very similar in the utility class:
public static function fetchUrl( ... ):void {
Tracker.startTracking(url, new Date());
...
Tracker.stopTracking(url, new Date());
}
Any components in this application wishing to dispatch an HTTP request must do so via the web utility class. This creates quite a bit of coupling between this class and other components, and is just one example of several where such reliance on static functions exists. This causes problems when we're extending and refactoring the system: I would like to decouple this using events.
Ultimately, I'd like each component to instantiate a custom event which is dispatched to a high-level framework. From there, the framework itself would relay the event to the correct location. In other words, those components which need to perform an HTTP request would dispatch an event like this:
dispatchEvent(new WebRequestEvent(url, parameters, successHandler, errorHandler));
From there, Parsley (or another framework) would make sure the event is sent to the correct location which could handle the functionality and perform whatever is necessary. This would also allow me a stepping-stone to introducing a more compartmentalized MVC architecture, where web request results are handled by models, injected by the framework into their own respective views.
I'd like to do the same with the tracking functionality.
Are there drawbacks from using an event-based mechanism, coupled with a framework like Parsley? Is it better to stick with static functions/variables and use the singleton-style access? If so, why? Will this end up causing more trouble in the future than it's worth?
So, short answer on Events drawbacks:
Slightly more weight on the system to use events. Listeners, bubbling, capture, etc. all have to be managed by the system. This is much less of an issue when you're outside the display hierarchy, but it is still more expensive than straight function calls. (Then again, avoid premature optimization: get it right, then get it fast.)
"Soft" circular dependencies can occur in complicated asynchronous systems. This means you end up with a case where A triggers an event. B notices A has changed, so updates C. C triggers an event. D notices C has changed and updates A. Doesn't usually max the CPU, but is still a non-terminating loop.
Sometimes you need to have forced buffering / locking of functions. Say component A and B both trigger the same event. You might not want C to be triggered twice (e.g., fetching new data from server) so you have to make sure C is marked as "busy" or something.
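The busy-flag guard from the last point can be sketched in a few lines of plain JS (framework-free; the fetch scenario and names are illustrative): while the expensive handler is pending, duplicate triggers of the same event are dropped.

```javascript
// Busy-flag guard: if components A and B both raise the same event,
// the expensive handler (e.g. a server fetch) runs only once until
// it signals completion via the supplied done() callback.
function makeGuarded(fn) {
  var busy = false;
  return function () {
    if (busy) return false;        // drop the duplicate trigger
    busy = true;
    fn(function done() { busy = false; });
    return true;
  };
}

var fetchCount = 0;
var release;                        // stands in for the async response
var fetchData = makeGuarded(function (done) {
  fetchCount++;
  release = done;                   // "server responds" later
});

fetchData();   // starts the "fetch"
fetchData();   // duplicate trigger while busy: ignored
release();     // server "responds", guard is released
fetchData();   // allowed again
```

The same shape works as a command guard in PureMVC or Parsley: mark the handler busy on entry and release it in the result/fault callbacks.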
From personal experience, I haven't seen any other issues with event systems. I'm using the PureMVC framework in a relatively complicated system without issue.
Good luck.

WCF Services - splitting code into multiple classes

I'm currently looking at developing a WCF Service for the first time, but am a bit confused.
Say I've got a windows application that is going to call into this web service, that service is then going to call our data methods to retrieve and save the data.
Now, say for example we have two classes in our solution, Customer and Product. Do all methods within the service have to go into the same class file (e.g. MyService.svc), or can they be split into several classes replicating the main data layer, i.e. Customer.cs and Product.cs? If they can be split, how do they get called from within the Windows Forms application? Would each class be a different endpoint?
At the moment I can access the methods within the main class (e.g. MyService.svc), but I can't see any of the methods in the other classes, even though I have attributed them with "ServiceContract" and "OperationContract".
I have a feeling I'm missing something simple somewhere, just not sure what.
I would be grateful if some nice person could point me in the direction of a tutorial on doing this, as every tutorial I've found only includes the single class :)
Thanks in advance.
What you need to define is data contracts for your service.
Theoretically, these data contracts could be your business entities (since 3.5 SP1 and its WCF POCO support).
It's better, though, to create separate entities for your service, and then conversion classes that can convert your business entities into service entities and the other way around.
Actually, after loads of searching, I finally seemed to find what I was looking for just after posting my question (typical).
I've found the following page - http://www.scribd.com/doc/13136057/ChapterImplementing-a-WCF-Service-in-the-Real-World
Although I've not gone through it yet, it does look like it will cover what I'm after.
Apologies for wasting anyone's time :) Hopefully this will be useful to someone else looking for the same thing.
It sounds like you only need one service. However, if you do need to create multiple services, consider this example:
[ServiceContract(Name = "Utility", Namespace = Constants.COMMON_SERVICE_NAMESPACE)]
public interface IService
[ServiceContract(Name="Documents", Namespace = Constants.DOCUMENTS_SERVICE_NAMESPACE)]
public interface IDocumentService
[ServiceContract(Name = "Lists", Namespace = Constants.LISTS_SERVICE_NAMESPACE)]
public interface IListService
Remember that you can create multiple data contracts inside a single service, and it is the best solution for a method that will require a reference to Customer(s) and Product(s).
It might help to take a look at MSDN's data contract example here.