SignalR groups - connecting/disconnecting and sending - am I missing something?

I'm very new to SignalR, and have rolled up a very simple app that will take questions for moderation at conferences (it felt like a straightforward use case).
I have 2 hubs at the moment:
- Question (for asking questions)
- Speaker (these should receive questions and allow moderation, but that will come later)
Solution lives at https://github.com/terrybrown/InterASK
After watching a video by David Fowler and Damian Edwards (http://channel9.msdn.com/Shows/Web+Camps+TV/Damian-Edwards-and-David-Fowler-Demonstrate-SignalR),
and another that I can't find the URL for at the moment, I thought I'd go with 'groups' as the concept to keep messages flowing to the right people.
I implemented IConnected and IDisconnect as I'd seen in one of the videos, and while debugging I can see Connect fire (and on reload I can see Disconnect fire), but nothing I do seems to add a person to a group.
The SignalR documentation says "Groups are not persisted on the server so applications are responsible for keeping track of what connections are in what groups so things like group count can be achieved", which I guess is telling me that I need some means (static or otherwise?) of tracking who is in each group?
Certainly I don't seem able to send to groups at the moment, though I have no problem broadcasting to everyone connected to the app and implementing the same JS method (2 machines on the same page).
I suspect I'm just missing something - I read a few of the other questions on here, but none of them seem to mention IConnected/IDisconnect, which tells me these are either new (and nobody is using them) or that they're old (and nobody is using them).
I know this could be considered a subjective question, but what I'm looking for is just a simple means of managing groups so that I can do what I want: send a question from one hub and have people connected to a different hub receive it. Groups felt like the cleanest solution for this?
Many thanks folks.
Terry

As you seem to understand, which groups a logical connection (a user, if you will) belongs to is something you, the application writer, are responsible for maintaining across network disconnects/reconnects. If you look at the way JabbR does this, it maintains the set of "rooms" a user is in within its database. Upon reconnecting, the user's identity helps place the current connection back into the proper set of groups that represent those specific "rooms".
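To make that concrete, here is a minimal sketch in later ASP.NET SignalR (2.x) hub terms - the older IConnected/IDisconnect API from the videos spells these methods differently, and the hub and method names below are illustrative, not taken from InterASK:

using System.Collections.Concurrent;
using System.Threading.Tasks;
using Microsoft.AspNet.SignalR;

public class QuestionHub : Hub
{
    // Groups.Add tells SignalR to route group messages to a connection, but
    // SignalR keeps no queryable membership list, so we track it ourselves.
    private static readonly ConcurrentDictionary<string, string> GroupMembers =
        new ConcurrentDictionary<string, string>();

    public async Task JoinConference(string conferenceId)
    {
        await Groups.Add(Context.ConnectionId, conferenceId);
        GroupMembers[Context.ConnectionId] = conferenceId;
    }

    public Task AskQuestion(string conferenceId, string question)
    {
        // Only connections that have joined this group receive the message.
        return Clients.Group(conferenceId).receiveQuestion(question);
    }

    public override Task OnDisconnected(bool stopCalled)
    {
        string removed;
        GroupMembers.TryRemove(Context.ConnectionId, out removed);
        return base.OnDisconnected(stopCalled);
    }
}

The dictionary (or, as JabbR does it, a database table) is your own record of membership; on reconnect you re-issue Groups.Add from that record, because SignalR itself will not remember it for you.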

Related

What is a Processing Group (@ProcessingGroup)?

I'm new to Axon Framework and struggling to understand what processing groups are and what they are used for.
If you guys can expand on this, it would be greatly appreciated.
I'm trying to have 2 instances of the same application running on different hosts with a single database (event store). However, I get the error below. The first host works fine but the second doesn't.
Should I assign different processing groups to them?
org.axonframework.eventhandling.tokenstore.UnableToClaimTokenException: Unable to claim token 'projections[0]'. It is owned by '1#xxxxx-yyyyy-zzzzz'
Regards,
Carlo
Welcome! Let me go through your doubts and try to help you :)
What is a Processing Group?
A Processing Group is a logical way to group Event Handlers. You can define one on your Event Handling Component using the @ProcessingGroup("processingGroupName") annotation or, if you do not provide a name, the default value is the full.package.name. Keep in mind that a Tracking Event Processor is created for each Processing Group.
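A minimal Java sketch (the class and event names are made up for illustration):

import org.axonframework.config.ProcessingGroup;
import org.axonframework.eventhandling.EventHandler;

// Every @EventHandler method in this class belongs to the "projections"
// processing group, so Axon creates one Tracking Event Processor for it.
@ProcessingGroup("projections")
public class SurveyProjection {

    @EventHandler
    public void on(SurveyCreatedEvent event) {
        // update the read model here
    }
}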
What is a Processing Group used for?
As said before, it is very much related to the Tracking Event Processor: each TEP claims its Tracking Token, in order to avoid processing the same event multiple times in different threads/nodes. You can read about it in depth here.
What about your problem?
From the shared log and your comment stating you have 2 instances, it just means one instance has already claimed that token and the other one can't claim it. If the first one releases it, they will race to claim it, meaning either of them can get it. There are ways to have both process events in parallel; you can check the docs here and here.
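As a hedged sketch of one of those options, assuming Axon 4's configuration API and an existing Configurer instance named configurer, you could split the processor into two segments so each of your two instances can claim one:

// Hedged sketch: give the "projections" processor two segments so that
// each of the two application instances can claim one token.
configurer.eventProcessing()
          .registerTrackingEventProcessorConfiguration(
                  "projections",
                  conf -> TrackingEventProcessorConfiguration
                                  .forParallelProcessing(2)
                                  .andInitialSegmentsCount(2));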
To sum up, there is nothing wrong with this log line; if you look again, I believe you will find it was logged at INFO level rather than ERROR.
Hope to have clarified your doubts about it.

Managing a connection to an external component in an Aggregate

I just discovered Axon framework and am very eager to use it as it answers a lot of my questions and concerns about DDD.
The first project I'd like to use it on contains small cameras which are controlled via the GigEVision protocol (over TCP and UDP for the control and stream channels). I think my problem would be the same for any case where we maintain a connection to an external component or, more generally, where we want to link an external component's lifecycle to Axon's lifecycles.
I'd like to have an Aggregate named Camera to which I can send Commands to grab 1 image or start grabbing N images at a certain FPS.
What I'm not sure about is how to manage the connection to an external component in my Aggregate.
- Should I inject the client for my camera into my Camera Aggregate and consider connecting to it part of my protocol / business commands? In that case, how would I link the camera lifecycle (a camera gets disconnected all of a sudden) to the aggregate lifecycle (creating a corresponding CameraDisconnectedEvent)?
- Should the connection be handled in a sidecar Saga which gets the camera client injected, the Saga starting on a ConnectionRequestedEvent and stopping as soon as we get a connection error from the camera? I would have the same issue of linking the connection lifecycle to the lifecycle of the Saga, I think.
- Am I leaking implementation details into the business layer, and should I manage the issue another way?
- Am I just using the wrong tool for this job and should not try to force it into Axon?
Thank you very much in advance, hope my message and issues make sense.
Best regards,
First and foremost, you should ensure that the language spoken by the GigEVision protocol by no means transitions over into your other domain.
These two should be separate and remain so, as they cover different concerns.
This brings to light the necessity for a translation layer of some sort, more formally called a context mapping. From a DDD perspective, you would take this even further by talking about an Anti-Corruption Layer.
The name already says it: you add a layer to ensure you are not corrupting your domain logic with that of another domain.
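Purely as an illustration, such an anti-corruption layer could be a thin adapter that listens to the raw GigEVision client and publishes messages in your domain's own language. Everything below except Axon's EventGateway is a hypothetical name:

import org.axonframework.eventhandling.gateway.EventGateway;

// Hypothetical adapter: translates transport-level GigEVision callbacks
// into domain events, so the domain never sees protocol types.
public class GigEVisionAdapter implements GigEVisionListener {

    private final EventGateway eventGateway;

    public GigEVisionAdapter(EventGateway eventGateway) {
        this.eventGateway = eventGateway;
    }

    @Override
    public void onConnectionLost(String deviceId) {
        // Publish in the domain's language, not the protocol's.
        eventGateway.publish(new CameraDisconnectedEvent(deviceId));
    }
}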
Another useful topic to read up on here would be the notion of a Bounded Context.
I digress though, let's go back to the problem at hand: where to position this anti-corruption layer.
What is currently unclear to me is what domain requirement makes it necessary to maintain the connection at all times when requesting images.
Is the command you want to send requesting a live feed? Or just "some" images from a given time frame?
In both scenarios I am not immediately convinced that either operation requires validation through a single Camera aggregate, to be honest.
You could still model this in a command and event format, as the messaging paradigm is very helpful to allow clean segregation of concerns.
But given the current description, I am uncertain whether you need DDD's idea of an Aggregate here at all.
I might be wrong on this note, but I just don't know enough about your domain at this stage.
That's my two cents to the situation, hoping this helps!

Photon Networking - GetRoomList()

I used the GetRoomList() function to get the rooms on the server. But once I joined a particular room, I am not able to see the list. How can I see the list even while I am connected to a room?
Short Answer: I'm afraid you can't - it's not supported out of the box.
Long Answer:
It's by design.
The reason for this is the scalable architecture of Photon Loadbalancing, which is the server-side application and part of the SDK that is usually used in conjunction with PUN. Loadbalancing is also what drives the Photon Cloud, in case that is what you are referring to.
What happens is that your client initially connects to what is called the Master, where the matchmaking takes place. Your client can either create a new room or join a room using the list of rooms or any of the other matchmaking options, like JoinRandom for instance.
Once any of this happens the master will send your client the information required to connect to the GameServer hosting the game.
In the case of a new game, it will send your client to the server with the least load.
Under load, the list of games and their properties can change very quickly:
- games are created
- players join and leave
- games reach their maximum allowed number of players
These changes are propagated from all the GameServers back to the Master, but they are not propagated to all GameServers, meaning the list is not available to your client once it is in a room.
Options
There is a longer thread about this topic here:
http://forum.exitgames.com/viewtopic.php?f=17&t=3093
The gist of the discussion is:
In case you are using the self-hosted version of Photon (not the Photon Cloud), you could extend Loadbalancing.
You could connect a second peer and leave it connected to the Master, with the disadvantage that it is counted as a second CCU, which might be a problem for your costs since pricing is all based on CCUs.
Disadvantages of Game Lists
In general, the use of game lists as a design component in games tends to be overrated - it is an easy way to have matchmaking during development, but it doesn't work well (for most games) when going live.
Players usually don't care which game they join beyond a few filters, and there is no need for game lists for that, since Photon supports matchmaking with filters. In practice, lists usually lead to players choosing one of the first games shown; under load this often means that many players try to join the same game, which is usually a problem since most games need some kind of maximum number of players.
BTW: when players do care, it's often "I want to play with a friend", which is also supported by Photon matchmaking.
If you have mobile clients, the list of games can be a burden, especially with very active games: it takes time to load the list, and it uses bandwidth to load and maintain it. Due to the latency, it is also more likely for players to select games that are already full.
You might be able to do it by creating a second instance of PhotonNetwork and connecting it to said server where the games are hosted, and have this instance on a game object that also talks to the object that has your lobby script. Not too sure if this will work or how hard it would be to implement. I'm currently working with Photon myself.

How to hack-proof a data submission program

I am writing a score submission system for games where I need to ensure that reports back to the server are not falsified (aka, hacked).
I know that I can store a password or private passkey in the program to authenticate or encrypt the request but if the program is decompiled, a crafty hacker can extract the password/passkey and use it to falsify reports.
Does a perfect solution exist?
Thanks in advance.
No. All you can do is make it difficult for cheaters.
You don't say what environment you're running on, but it sounds like you're trying to solve a code authentication problem*: knowing that the code that is executing is actually what you think it is. This is a problem that has plagued online games forever and does not have a good solution.
Common ways in which such systems are broken:
Capture, modification and replay of submissions to the server
Modifying the binary to allow cheating
Using a debugger to modify the submission in-memory before the program applies signatures/encryption/whatever
Punkbuster is an example of a system which attempts to solve some of these problems: http://en.wikipedia.org/wiki/PunkBuster
Also consider http://en.wikipedia.org/wiki/Cheating_in_online_games
Chances are, this is probably too hard for your game. Hiding a signing key in your binary and signing everything that leaves it will probably put you well ahead of the pack, security-wise.
* Apologies, I don't actually remember what the formal name for this is. I keep thinking "running code authentication", but Google comes up with nothing for the term.
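As a rough illustration of that signing suggestion, here is a minimal C# sketch using an HMAC. The key, payload layout, and names are all hypothetical - and, as said above, anything embedded in the client can ultimately be extracted:

using System;
using System.Security.Cryptography;
using System.Text;

static class ScoreSubmission
{
    // Hypothetical embedded key; a determined attacker can still extract it.
    private static readonly byte[] Key = Encoding.UTF8.GetBytes("replace-me");

    // Returns "payload|signature"; the server recomputes the HMAC and compares.
    public static string Sign(string playerId, int score)
    {
        // The timestamp gives the server a crude way to reject replayed reports.
        string payload = playerId + ":" + score + ":" +
                         DateTimeOffset.UtcNow.ToUnixTimeSeconds();
        using (var hmac = new HMACSHA256(Key))
        {
            byte[] mac = hmac.ComputeHash(Encoding.UTF8.GetBytes(payload));
            return payload + "|" + Convert.ToBase64String(mac);
        }
    }
}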
There is one thing you can do - record all of the user inputs and send those to the server as part of the submission. The server can then replay the inputs through a local copy of the game engine to determine the score. Obviously this isn't appropriate for every type of game, though. Depending on the game, you may need to include replay protection.
Another method that may be appropriate for some types of games is to include a video recording of the high-scoring play within the submission. Provide links to the videos from the high score table, along with a link to report suspicious entries. This will let you "crowd-source" cheat detection - if a cheater's score hits the table at number 1, then the players behind scores 2 through 10 have a pretty big incentive to validate the video for you. If a score is reported enough times, you can check the video yourself and decide if it should be removed (and the user banned).

What would be the best way to store the questions and responses for a survey where I need to keep the traffic on the database to a minimum?

Background
I am writing a survey that is going to a large audience. It contains 15 questions and there are five possible answers to each question along with potential comments.
The user can cycle through all 15 questions answering them in any order and is allowed to leave the survey at any point and return to answer the remaining questions.
Once an answer has been attempted on all 15 questions, a submit button appears which allows them to submit the questions as final answers. Until that stage, all answers must be retrievable whenever the user loads the survey page.
The requirement is that the user only sees one question on a page and 'Previous' and 'Next' buttons allow the user to scroll through the questions.
Requirement
I could request the question each time the user clicks a button and save the current response, and so on, but that would be a large number of hits to a database that is already heavily used. I don't have the time to procure a new server etc., so I have to make do with what I have. Is there any way I can cache the questions and/or responses on the user's machine? Obviously I need the response data to be secure and known only to the user, so I feel a little bit stuck as to the best way of doing this. Any pointers?
I am prepared to offer a bounty of 100 points on this question if it means I get some good quality discussion and feedback going.
Unless there's a reason for using a database, you could always store the results in flat files on the server itself. It doesn't sound like the data you're storing is relational in any way. Worst comes to worst, you could always insert them back into a relational db as a batch job every night.
Another option would be the application cache. However, if your web server suddenly crashes on you, you risk losing information from there.
You could also store the values in the user's cookies.
Based on my personal experience (serving thousands of short survey pages per second), I suspect your fears are unfounded. Among other reasons, the DBMS will cache such small amounts of data far more efficiently than you can.
I've tested this, loading the questions and answers into an Application-scope collection at start up, and serving them from memory after that - often it made no difference at all.
Your alternative is to send everything at once to the browser and write it as a JavaScript application, storing the data in (encrypted) cookies and only hitting the database when the whole thing is done. This is tedious but not difficult.
You have three requirements that need to be balanced:
users must be able to return to their survey at any time
answers entered by users must be saved with the least possible chance of data loss
need to minimize database hits
Any solution that involves caching answers in a volatile place (cookies, session, etc) will increase the risk of data loss. The final solution depends on how you rank the three requirements in importance. If the db issue is at the top, then you will either need to risk data loss, or spend a lot of extra time coding a solution using some temporary storage scheme (like Kevin's flat file idea).
A couple of folks suggested that you may be optimizing prematurely. I suggest you consider that idea first - maybe this whole thing is moot.
However, assuming that your db situation is a real problem, I think your best balance of requirements will be a system that saves answer to the db immediately (to prevent data loss) but carefully manages when you actually have to hit the db.
When the app starts up (or when the first user requests the survey) load the survey and its questions into application cache. If any of the questions have a pick list of possible answers, load these also. You will only have to hit the db once during the application lifetime (or your cache duration) to load survey data.
When a user starts their survey, run a single query to load any existing answers (in case they are a returning user) into an object in session - this could be as simple as a List<string>. (If you can somehow identify a new user without having to hit the db, then you can skip this step for new users.)
Use the session answer object along with the survey question object in app cache to populate each page without hitting the db again.
When the user submits an answer, compare it to the session answer object to see if it has changed (she may be just clicking 'next' on a page with a previously entered answer). If the answer is new or has changed, then save it to the db and to the session answer object.
When the user leaves the survey, you don't need to do anything - everything is saved already.
With this scheme, you hit the db once to load the survey, once for each user when they start (or restart) the survey, and once for each new or modified answer. Probably not as much of a reduction as you were hoping for, but it gives you the best data protection.
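A minimal ASP.NET-flavoured sketch of the "save only new or changed answers" step (SaveAnswerToDb and the session key are illustrative names, not a prescribed API):

using System.Collections.Generic;
using System.Web.SessionState;

public class SurveyAnswerService
{
    // Save an answer only when it is new or differs from the session copy.
    public void SubmitAnswer(HttpSessionState session, int questionId, string answer)
    {
        var answers = session["SurveyAnswers"] as Dictionary<int, string>;
        if (answers == null)
        {
            answers = new Dictionary<int, string>();
            session["SurveyAnswers"] = answers;
        }

        string previous;
        if (!answers.TryGetValue(questionId, out previous) || previous != answer)
        {
            SaveAnswerToDb(questionId, answer); // the only db hit on this path
            answers[questionId] = answer;       // keep the session copy in sync
        }
    }

    private void SaveAnswerToDb(int questionId, string answer)
    {
        // Illustrative placeholder for the actual INSERT/UPDATE.
    }
}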
If the database trips are a problem, you can cache them in the web server (or wherever your application resides) but it sounds like each answer needs to be recorded as the user goes to the next question.
If the questions and possible answers are identical for everyone, I would definitely cache them in the application layer - this can be stored in the Application object. In any case, you could certainly optimize the database calls to return the results as efficiently as possible - i.e. multiple result sets or a joined result set from a single stored proc. If you don't mind multiple copies for each session (or if there is variation), you can store it in the Session object. Storing it on the client (i.e. a cookie) is not really secure, and is kind of pointless from a web server-client bandwidth-saving perspective.
This sounds a lot like premature optimization to me, though.
Your scenario is a perfect candidate for the Predictive Fetch pattern. I would suggest that you cache all your questions. When the user signs in, use the pattern to fetch the first 5 answers (if they have given any answers) and, based on their navigation (where their current question is), get the information from the Response object or from the DB.
HTH
Not sure of the languages etc. you are using, but most have an application cache. I would store the questions there, retrieving them from the database and re-caching them whenever they are not in the cache (e.g. after the application recycles).
As for the answers, are the users logging in somehow? Is it feasible to save answers in a cookie until all questions are answered?
Edit:
If cookies aren't reliable enough, you could store (in the application cache) a list of queries (inserts/updates) to be executed; they would not be executed until a query limit was reached or under certain conditions (i.e. execute the list when a user requests answers that are in the list, execute the list when the application recycles, etc.).
Pretty crude, but you get the idea:
// Flush the queued queries when a queued answer is needed or the survey ends.
if ((function == "get question" && userQuestionIsInQueue) || function == "finish survey")
{
    Execute(Application["querylist"]);
    // ...continue as normal
}

// Otherwise, queue the answer's query instead of hitting the db immediately.
if (function == "submit answer")
{
    if (Application["querylist"] == null)
        Application["querylist"] = newAnswerQuery;
    else
        Application["querylist"] += newAnswerQuery;
}
You'd also need to add Execute(Application["querylist"]) to the recycle event; I believe you can hook into it in Global.asax.
Edit 2:
I would also accumulate all database transactions for a request into one: if you did have to execute the list and then fetch the answers for the user, do both in the same transaction and save a trip. Common practice when optimizing.
This is a classic problem of maintaining state between pages in a browser-based system. I'm also assuming that we want this data to persist even if the user logs out and comes back later. Here are the options:
With a high availability server we can keep a single collection of 15 answers in memory (not session) for this user (probably not a good idea and not easily load balanced)
We denormalise the 15 answers into 1 row of a SQL table
We persist the data on the client using a cookie or localStorage (IE8).
My feeling is that the first two options are probably not what you are looking for, so let's explore the last option.
You could quite simply store the answers in a cookie. There is a small chance that this could get lost, and the user may log in from another machine, but this may be an acceptable risk. With the latest browsers that support HTML5 (including IE8, afaik) you get the benefit of localStorage, which is not as easily deleted as a cookie. You could fall back to cookies if it wasn't available.
Cookies can be encrypted if required.
I would like to offer you the new HTML5 feature called DOM Storage, but since only newer browsers support it, using it could be a problem at this point.
With DOM Storage, you can store data in the user's browser. Since it can store up to 5 MB per domain in Mozilla Firefox, Google Chrome, and Opera, and 10 MB per storage area in Internet Explorer, you can store the answers and question ids in DOM Storage.
Beyond sparing the database, DOM Storage lets you reduce server hits in general.
Since we all know working with cookies is a hassle sometimes, and a cookie can store only about 4 KB, the easiest way now is to store key-value information in DOM Storage.
You can store key-value information per session as well as locally. When the session ends, the session-based info is wiped from the browser, but if you store local-based values, the key-value data will remain even after the user closes the tab.
Example Code:
<p>
You have viewed this page
<span id="count">an untold number of</span>
time(s).
</p>
<script>
// localStorage persists across browser sessions; sessionStorage would reset per tab.
var storage = window.localStorage;
if (!storage.pageLoadCount) storage.pageLoadCount = 0;
// Stored values are strings, so parse before incrementing.
storage.pageLoadCount = parseInt(storage.pageLoadCount, 10) + 1;
document.getElementById('count').innerHTML = storage.pageLoadCount;
</script>
You can learn more about DOM Storage from the links below:
https://developer.mozilla.org/en/DOM/Storage
http://en.wikipedia.org/wiki/Web_Storage
http://msdn.microsoft.com/en-us/library/cc197062%28VS.85%29.aspx
Do you mean... a cookie?
