I just discovered Axon framework and am very eager to use it as it answers a lot of my questions and concerns about DDD.
The first project I'd like to use it on contains small cameras which are controlled via the GigEVision protocol (over TCP and UDP for the control and stream channels). I think my problem would be the same for any case where we maintain a connection to an external component or more generally we want to link an external component lifecycle to Axon's lifecycles.
I'd like to have an Aggregate named Camera to which I can send Commands to grab 1 image or start grabbing N images at a certain FPS.
What I'm not sure about is how to manage the connection to an external component in my Aggregate.
- Should I inject the client for my camera into my Camera Aggregate and consider connecting to it part of my protocol / business commands? In that case, how would I link the camera lifecycle (a camera gets disconnected all of a sudden) to the aggregate lifecycle (create a corresponding CameraDisconnectedEvent)?
- Should the connection be handled in a side-car Saga which gets the camera client injected, the Saga starting on ConnectionRequestedEvent and stopping as soon as we get a connection error from the camera? I think I would face the same issue of linking the connection lifecycle to the lifecycle of the Saga.
- Am I leaking implementation details into the business layer, and should I manage the issue another way?
- Am I just using the wrong tool for this job and should not try to force it into Axon?
Thank you very much in advance, hope my message and issues make sense.
Best regards,
First and foremost, you should ensure that the language spoken by the GigEVision protocol by no means leaks into your own domain.
These two should be separate and remain so, as they cover different concerns.
This brings to light the necessity to have a translation layer of some sort.
More formally, this is called a context mapping. From a DDD perspective, you would take this even further by talking about an Anti-Corruption Layer.
The name already says it, you add a layer to ensure you are not corrupting your domain logic with that from another domain.
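As a rough illustration of such an anti-corruption layer, here is a minimal TypeScript sketch. All names here (GigEVisionLinkEvent, GigEVisionAcl, the event shapes) are hypothetical stand-ins for illustration, not Axon or GigEVision APIs; the point is that the ACL is the only place that knows both vocabularies:

```typescript
// Hypothetical driver-side event from some GigEVision client library.
interface GigEVisionLinkEvent {
  kind: "LINK_UP" | "LINK_DOWN";
  deviceSerial: string;
}

// Domain-side messages, phrased in the camera domain's own language.
interface CameraConnected { cameraId: string }
interface CameraDisconnected { cameraId: string }

type DomainEvent =
  | { type: "CameraConnected"; payload: CameraConnected }
  | { type: "CameraDisconnected"; payload: CameraDisconnected };

// The anti-corruption layer: translates protocol terms into domain terms
// and publishes the result to whatever bus/gateway the domain uses.
class GigEVisionAcl {
  constructor(private publish: (e: DomainEvent) => void) {}

  onDriverEvent(e: GigEVisionLinkEvent): void {
    // Protocol vocabulary (link up/down, device serial) stops here.
    const cameraId = `camera-${e.deviceSerial}`;
    if (e.kind === "LINK_UP") {
      this.publish({ type: "CameraConnected", payload: { cameraId } });
    } else {
      this.publish({ type: "CameraDisconnected", payload: { cameraId } });
    }
  }
}
```

With something like this in place, a sudden disconnect observed by the driver becomes an ordinary domain event, and the rest of the model never sees GigEVision terminology.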
Another useful topic to read up on here would be the notion of a Bounded Context.
I digress though, let's go back to the problem at hand: where to position this anti-corruption layer.
What is currently unclear to me is which domain requirements dictate that the connection be maintained at all times when requesting images.
Is the command you want to send requesting for a live feed? Or just "some" images from a given time frame?
In both scenarios, to be honest, I am not immediately convinced that either operation requires validation through a single Camera aggregate.
You could still model this in a command and event format, as the messaging paradigm is very helpful to allow clean segregation of concerns.
But given the current description, I am uncertain whether you need DDD's notion of an Aggregate here at all.
I might be wrong on this note, but I just don't know enough about your domain at this stage.
That's my two cents to the situation, hoping this helps!
I was watching a video on "Design Uber api mock interview".
For most of the APIs, the candidate just used "userId" and left it to the system to resolve "rideId".
Example - cancelRide(userId: string)
For this API, the user just passes userId to the cancelRide endpoint, and the system then has to resolve rideId to actually cancel the ride.
Now this may solve the problem at hand, but in the future Uber may want to enable multiple rides for a single user (you can book a ride for yourself and one for your mom at the same time).
With this kind of API design we would then have to change the cancelRide endpoint to accept rideId as well:
cancelRide(userId: string, rideId: string)
Should we just design for the problem at hand and change the API/design when a new requirement comes in, or should we at least consider some obvious/likely future requirements and changes?
You should think ahead: what are the potential requirements and changes they will need in the future?
The reason is that you are developing an API.
Assume you have more than 100K external clients of your API; any change will force them to update, modify, and rebuild their apps, which is a lot of work.
Thus, the YAGNI principle is a poor fit for API design. You should instead follow the Open-Closed Principle (OCP).
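One hedged way to apply OCP here is to accept a request object rather than positional parameters, so new fields can be added later without breaking existing callers. A minimal TypeScript sketch, where the in-memory rides array and the resolution rule are assumptions for illustration only:

```typescript
// A request object lets the API grow without breaking existing callers.
interface CancelRideRequest {
  userId: string;
  rideId?: string; // optional: absent means "the user's single active ride"
}

interface Ride { rideId: string; userId: string; active: boolean }

// Returns the cancelled ride, or undefined if the request is ambiguous
// or no matching active ride exists.
function cancelRide(req: CancelRideRequest, rides: Ride[]): Ride | undefined {
  const candidates = rides.filter(r => r.userId === req.userId && r.active);
  // Old clients omit rideId; that stays unambiguous only while
  // "one active ride per user" holds.
  const target = req.rideId
    ? candidates.find(r => r.rideId === req.rideId)
    : (candidates.length === 1 ? candidates[0] : undefined);
  if (target) target.active = false;
  return target;
}
```

Existing clients keep calling with `{ userId }`; once multiple concurrent rides are enabled, new clients add `rideId` and nothing else changes.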
I've been building data-driven applications for about 18 years, and for the past two I've been successfully using Angular for my large forms/CRUD-based apps. You know, the classic SQL Server DB with hundreds of tables and millions of records. So far, so good.
Now I'm porting/re-engineering a desktop app with about 50 forms, all complex, all fully functional, "smart". My approach for the last couple years was to simply work tightly with the backend rest API to retrieve, insert or update data as needed and everything works fine.
Then I stumbled across ngrx and I understand exactly how it works, what it does and why it is good for a "reactive" app.
My problem is the following: in the usual lifecycle of the kind of systems I mentioned, you always have to deal with fresh data and always have to tell everything to the server. Almost no data in such apps can be safely "stored" locally, since transactional systems rely on centralized data interactions. There's no such thing as "hey, let's keep this employee's sales here for later use".
So why would it be so important to manage a local 'store' when most of my data is volatile? I understand why it would be useful for global app data like the user profile or general UI-related state, but for the core data itself? I don't get it. You query for data, plug that data into the form, it gets processed by the user and sent back to the server. That data is no longer needed, and if you do need it, you ask for it again, as it could have changed state since the last time you interacted with it.
I do not understand the great lengths I have to go to in order to maintain a local store, and all the boilerplate, if that state is so volatile.
They say change detection does not scale, but I've built some really large web apps with a simple "http service" pattern and it works just fine, because most of the component tree is destroyed anyway as you navigate elsewhere in the app, and any previous subscriptions become useless. Even with large, bulky forms, the inner workings of a form are never such a problem as to require external "aid" from a store. The way I see it, the "state" of a form is a concern of that form in that moment alone. Is it to keep the component tree in sync? I never had problems with that before... even for complicated trees with lots of shared data, master-detail is a fairly flat pattern in the end if all the data is there.
For other components, such as grids, charts, reports, etc., the same thing applies. They get the data they need and then "poof", gone.
So now you see my mindset. I AM trying to change it to something better. What am I missing about the redux pattern?
I have a bit of experience here! It's all subjective, so what I've done may not suit you. My system is a complex system that sounds like it's on a similar scale as yours. I battled at first with the same issues of "why build complex logic on the front end and back end", and "why bother keeping stuff in state".
A redux/NGRX approach works for me because there are multiple ways data can be changed - perhaps it's a single user using the front end, perhaps it's another user making a change and I want to respond to that change straight away to avoid concurrency issues down the track. Perhaps there are multiple parts within my front end that can manipulate the same data.
At the back end, I use a CQRS pattern instead of a traditional REST API. Typically, one might suggest re-implementing the commands/queries to "reduce" changes to the state; however, I opted for a different approach. I don't just want to send a big object graph back to the server and have it blindly insert it, and I don't want to re-implement logic on both the client and the server.
My basic "use case" life cycle looks a bit like:
Load a list of data (limited size, not all attributes).
User selects item from list
Client requests "full" object/view/dto from server
Client stores response in object entity state
User starts modifying data
These changes are stored as "in progress" changes in a different part of state. The system is now responding to the data in the "in progress" part
If another change comes in from server, it doesn't overwrite the "in progress" data, but it does replace what is in the object entity state.
If required, UI shows that the underlying data has changed / is different to what user has entered / whatever.
User clicks the "perform action" button, or otherwise triggers a command to be sent to the server
Server performs the command; any errors are returned, or success
Server notifies the client that the change was successful, and the client clears the "in progress" information
Server notifies clients that Entity X has been updated; the client re-requests entity X and puts it into the object entity state. This notification is sent to all connected clients, so they can all behave appropriately.
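The life cycle above can be sketched as a reducer with two slices of state: the last known server truth and the user's unsaved edits. This is a hand-rolled TypeScript sketch, not NGRX itself; the Customer shape and action names are assumptions for illustration:

```typescript
// Server truth vs. the user's unsaved "in progress" edits.
interface Customer { id: string; name: string }
interface State {
  entities: Record<string, Customer>;            // last known server state
  inProgress: Record<string, Partial<Customer>>; // unsaved edits per entity
}

type Action =
  | { type: "serverUpdated"; entity: Customer }        // push from server
  | { type: "edited"; id: string; changes: Partial<Customer> }
  | { type: "commandSucceeded"; id: string };          // server accepted command

function reduce(state: State, action: Action): State {
  switch (action.type) {
    case "serverUpdated":
      // Replace server truth, but never clobber in-progress edits.
      return {
        ...state,
        entities: { ...state.entities, [action.entity.id]: action.entity },
      };
    case "edited":
      return {
        ...state,
        inProgress: {
          ...state.inProgress,
          [action.id]: { ...state.inProgress[action.id], ...action.changes },
        },
      };
    case "commandSucceeded": {
      // Command accepted: drop the "in progress" slice for this entity.
      const { [action.id]: _, ...rest } = state.inProgress;
      return { ...state, inProgress: rest };
    }
  }
}
```

The key property is in the `serverUpdated` branch: a concurrent change from another user updates the entity slice (so the UI can flag "underlying data has changed") without overwriting what this user is typing.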
With Hazelcast, I imagine a common scenario is that consumers want to know the current state of the world (whatever is currently in the Map) and then receive updates, with nothing lost in between.
Imagine a scenario where a Hazelcast map holds some variety of data that a consumer wants streamed (maybe via Rx): first the current state, and then any updates from the listener.
The API suggests that we add a Listener for updates and treat the Map as a normal ConcurrentMap. However, while I'm enumerating the Map, updates may come in via the listener, so ensuring the correct order of items is hard.
We could share a lock between the map enumerator and the listener, but that seems a bit of a code smell.
So my question in general is, if we want to stream the SoTW and then updates, how can we do this? Is there anything built into Hazelcast that can help us?
Thanks for the help
First of all, and I guess it is just unfortunately worded: a map has no order!
The second thing: a Hazelcast map is a non-snapshotting, non-persistent data structure. Just like a ConcurrentHashMap, changes to the data structure are reflected in the iterator and vice versa.
Events, on the other hand, are a completely independent system, especially since events are delivered asynchronously. So it might happen that an event arrives before you have advanced the iterator to the same element, or the other way around.
If you need a stronger guarantee: iterate first, add the listener, then reconcile the changes (you need to compute a delta in a second iteration) between the start of the first run and any events you might have missed. Not an especially nice way to handle it, but I don't think there's another.
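A common variation on that recipe is to register the listener first and buffer its events until the snapshot has been emitted. The TypeScript sketch below uses a plain Map to stand in for the Hazelcast IMap; the class and method names (SnapshotStream, onEntryEvent, emitSnapshot) are illustrative, not part of any Hazelcast API:

```typescript
// Stream state-of-the-world first, then updates, losing nothing in between.
type Update<K, V> = { key: K; value: V };

class SnapshotStream<K, V> {
  private buffer: Update<K, V>[] = [];
  private snapshotDone = false;

  constructor(private emit: (u: Update<K, V>) => void) {}

  // Called by the (asynchronous) entry listener for every map change.
  onEntryEvent(u: Update<K, V>): void {
    if (this.snapshotDone) this.emit(u);
    else this.buffer.push(u); // hold updates until the snapshot is out
  }

  // Called once, after the listener is registered, with the map contents.
  emitSnapshot(map: Map<K, V>): void {
    for (const [key, value] of map) this.emit({ key, value });
    // Replay buffered events; a later event for the same key wins downstream.
    for (const u of this.buffer) this.emit(u);
    this.buffer = [];
    this.snapshotDone = true;
  }
}
```

Downstream consumers treat the sequence as "last write per key wins", so an update that was already reflected in the snapshot and then replayed from the buffer is harmless, merely redundant.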
I used the get room list function to get the rooms on the server. But once I joined a particular room, I am no longer able to see the list. How can I see the list even while connected to a room?
Short answer: I'm afraid you can't - it's not supported out of the box.
Long Answer:
It's by design.
The reason for this is the scalable architecture of Photon Loadbalancing, which is the server-side application and part of the SDK that is usually used in conjunction with PUN. Loadbalancing is also what drives the Photon Cloud, in case that is what you are referring to.
What happens is that your client initially connects to what is called the Master, where the matchmaking takes place. Your client can either create a new room or join a room using the list of rooms, or use any of the other matchmaking options, like JoinRandom for instance.
Once any of this happens, the Master sends your client the information required to connect to the GameServer hosting the game.
In the case of a new game, it sends your client to the server with the least load.
Under load, the list of games and their properties can change very quickly:
- games are created
- players join and leave
- games reach their maximum allowed number of players
These changes are propagated from all the GameServers back to the Master, but they are not propagated to all GameServers, meaning the list is not available to your client once it is in a room.
Options
There is a longer thread about this topic here:
http://forum.exitgames.com/viewtopic.php?f=17&t=3093
The gist of the discussion is:
In case you are using the self-hosted version of Photon (not the Photon Cloud), you could extend Loadbalancing.
You could connect a second peer and leave it connected to the Master, with the disadvantage that it is counted as a second CCU, which might be a problem for your costs since pricing is based on CCU.
Disadvantages of Game Lists
In general, the use of game lists as a design component in games tends to be overrated - it is an easy way to do matchmaking during development, but it doesn't work well (for most games) when going live.
Players usually don't care and are better matched to games based on filters (there is no need for game lists for that, since Photon supports matchmaking with filters). Lists usually lead to players choosing one of the first games shown; under load this often means many players try to join the same game, which is usually a problem since most games need some kind of maximum player count.
BTW: if players do care, it's often "I want to play with a friend", which is also supported by Photon matchmaking.
If you have mobile clients, the list of games can be a burden, especially with very active games; it takes time to load the lists and uses bandwidth to load and maintain them. Due to the latency, it's also more likely for players to select games that are already full.
You might be able to work around this by creating a second instance of PhotonNetwork, connecting it to said server where the games are hosted, and having this instance drive a game object that also talks to the object holding your lobby script. Not too sure if this will work or how hard it would be to implement; I'm currently working with Photon myself.
Very new to SignalR; I have rolled up a very simple app that will take questions for moderation at conferences (felt like a straightforward use case).
I have 2 hubs at the moment:
- Question (for asking questions)
- Speaker (these should receive questions and allow moderation, but that will come later)
Solution lives at https://github.com/terrybrown/InterASK
After watching a video (by David Fowler/Damian Edwards) (http://channel9.msdn.com/Shows/Web+Camps+TV/Damian-Edwards-and-David-Fowler-Demonstrate-SignalR)
and another whose URL I can't find at the moment, I thought I'd go with 'groups' as the concept to keep messages flowing to the right people.
I implemented IConnected and IDisconnect as I'd seen in one of the videos, and while debugging I can see Connect fire (and on reload I can see Disconnect fire), but nothing I do seems to add a person to a group.
The SignalR documentation says "Groups are not persisted on the server so applications are responsible for keeping track of what connections are in what groups so things like group count can be achieved", which I guess is telling me that I need some means (static or otherwise?) of tracking who is in which group?
Certainly I don't seem able to send to groups currently, though I have no problem broadcasting to everyone connected to the app who implements the same JS method (2 machines on the same page).
I suspect I'm just missing something - I read a few of the other questions on here, but none of them seem to mention IConnected/IDisconnect, which tells me these are either new (and nobody is using them) or that they're old (and nobody is using them).
I know this could be considered a subjective question, but what I'm looking for is just a simple means of managing groups so that I can do what I want: send a question from one hub and have people connected to a different hub receive it - groups felt like the cleanest solution for this?
Many thanks folks.
Terry
As you seem to understand, which groups a logical connection (user, if you will) belongs to is something you, the application writer, are responsible for maintaining across network disconnects/reconnects. If you look at the way JabbR does this, it maintains the state of which "rooms" a user is in in its database. Upon reconnecting, the user's identity helps place the current connection back into the proper set of groups that represent those "rooms".
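A minimal sketch of that bookkeeping, with a callback standing in for SignalR's group-add call; the class and method names here are assumptions for illustration, not SignalR APIs:

```typescript
// Track room membership per user so a reconnecting connection can be
// placed back into its groups (JabbR keeps this in its database; an
// in-memory map is used here to keep the sketch self-contained).
class GroupRegistry {
  private roomsByUser = new Map<string, Set<string>>();

  join(userId: string, room: string): void {
    if (!this.roomsByUser.has(userId)) {
      this.roomsByUser.set(userId, new Set());
    }
    this.roomsByUser.get(userId)!.add(room);
  }

  leave(userId: string, room: string): void {
    this.roomsByUser.get(userId)?.delete(room);
  }

  // On reconnect: re-apply the user's rooms to the new connection id.
  // The callback stands in for a call like Groups.Add(connectionId, room).
  restore(userId: string, addToGroup: (room: string) => void): void {
    for (const room of this.roomsByUser.get(userId) ?? []) {
      addToGroup(room);
    }
  }
}
```

The key point is that membership is keyed by user identity rather than connection id, since the connection id changes on every reconnect while the user's rooms should not.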