RPC synchronization alternative (MMORPG, Photon Unity3D networking)

I'm using Photon Unity Networking and I am working on a little game of mine.
I got to a point where I have a room with players and mobs.
When a new player appears, I use an RPC call to send that player's information to all other connected users so everyone stays synchronized.
The problem is that this new player does not have any information about the rest of the room (his info is not up to date).
I mean, for instance, the current health of other players, or the current health of mobs, etc.
The only solution I came up with is to send an RPC to the master client, pass through all the volatile objects around, and send several RPC calls back to the new player with this update.
What I am asking is... do I really have to do it like this? Or is there any other way, any better or simpler way?

Okay, so Photon networking works via the PhotonView component and its observed components, meaning scripts.
In the observed script, if it is your character and you are controlling it, you pass all the variables you need - position, rotation, name, health, data relevant for animations, and so on - by using SetSynchronizedValues():
m_PhotonView = GetComponent<PhotonView>(); // cache the view, e.g. in Awake()
if (m_PhotonView.isMine) // in Update()
{
    m_TransformView = GetComponent<PhotonTransformView>();
    m_TransformView.SetSynchronizedValues(Position, Health, Name);
}
Photon will then synchronize the variables. On a character you are not controlling, you use the received values instead (display the name, set the object to the correct position, show a health bar and resize it):
if (m_PhotonView.isMine == false)
Hope I could help you
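For the late-joiner problem specifically, two standard PUN mechanisms are worth knowing: buffered RPCs (PhotonTargets.AllBuffered) are replayed to players who join later, and any state streamed through an observed component reaches a new player with the next update, so no manual relay through the master client is needed for continuously synchronized values. Below is a minimal sketch of the observed-component route using OnPhotonSerializeView (PUN classic API); the PlayerSync class name and Health field are illustrative assumptions:
using UnityEngine;

public class PlayerSync : Photon.MonoBehaviour
{
    public float Health = 100f;

    // Called by PUN for components observed by the PhotonView.
    void OnPhotonSerializeView(PhotonStream stream, PhotonMessageInfo info)
    {
        if (stream.isWriting)
        {
            // We own this object: write the current state into the stream.
            stream.SendNext(transform.position);
            stream.SendNext(Health);
        }
        else
        {
            // Remote copy: read the state the owner sent.
            transform.position = (Vector3)stream.ReceiveNext();
            Health = (float)stream.ReceiveNext();
        }
    }
}
Since the stream is sent continuously, a player who joins mid-session is up to date after the first update they receive.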

Related

Returning multiple items in gRPC: repeated List or stream single objects?

gRPC newbie. I have a simple api:
Customer getCustomer(int id)
List<Customer> getCustomers()
So my proto looks like this:
message ListCustomersResponse {
  repeated Customer customer = 1;
}
rpc ListCustomers (google.protobuf.Empty) returns (ListCustomersResponse);
rpc GetCustomer (GetCustomerRequest) returns (Customer);
I was trying to follow Google's lead on the style. Originally I had returns (stream Customer) for GetCustomers, but Google seems to favor the ListXxxResponse style. When I generate the code, it ends up being:
public void getCustomers(com.google.protobuf.Empty request,
StreamObserver<ListCustomersResponse> responseObserver) {
vs:
public void getCustomers(com.google.protobuf.Empty request,
StreamObserver<Customer> responseObserver) {
Am I missing something? Why would I want to go through the hassle of creating a ListCustomersResponse when I can just do stream Customer and get the streaming functionality?
The ListCustomersResponse just returns the whole list at once, versus streaming each customer. Google's preference seems to be to use the ListXxxResponse style all of the time.
When is it appropriate to use the ListXxxResponse style versus the stream response?
This question is hard to answer without knowing what reference you're using. It's possible there's a miscommunication, or that the reference is simply wrong.
If you're looking at the gRPC Basics tutorial though, then I might have an inkling as to what caused a miscommunication. If that's indeed your reference, then it does not recommend returning repeated fields for streamed responses; your intuition is correct: you would just want to stream the singular Customer.
Here is what it says (the tutorial defines the endpoint as a server-side stream):
rpc ListFeatures(Rectangle) returns (stream Feature) {}
You might be reading rpc ListFeatures(Rectangle) as meaning an endpoint that returns a list [noun] of features. If so, that's a miscommunication. The guide actually means an endpoint to list [verb] features. It would have been less confusing if they just wrote rpc GetFeatures(Rectangle).
So, your proto should look more like this,
rpc GetCustomers (google.protobuf.Empty) returns (stream Customer);
rpc GetCustomer (GetCustomerRequest) returns (Customer);
generating exactly what you suspected made more sense.
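For concreteness, here is roughly what the streaming variant looks like on the server side - a hedged sketch; the repository collaborator is an assumption, and the stub method name follows from a service declaring rpc GetCustomers:
@Override
public void getCustomers(com.google.protobuf.Empty request,
        StreamObserver<Customer> responseObserver) {
    // With "returns (stream Customer)", each customer is written
    // individually and gRPC delivers them as separate messages.
    for (Customer c : repository.findAll()) { // repository is assumed
        responseObserver.onNext(c);
    }
    responseObserver.onCompleted(); // signal end-of-stream
}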
Update
Ah I see, so you're looking at this example in googleapis:
// Lists shelves. The order is unspecified but deterministic. Newly created
// shelves will not necessarily be added to the end of this list.
rpc ListShelves(ListShelvesRequest) returns (ListShelvesResponse) {
  option (google.api.http) = {
    get: "/v1/shelves"
  };
}
...
// Response message for LibraryService.ListShelves.
message ListShelvesResponse {
  // The list of shelves.
  repeated Shelf shelves = 1;
  // A token to retrieve next page of results.
  // Pass this value in the
  // [ListShelvesRequest.page_token][google.example.library.v1.ListShelvesRequest.page_token]
  // field in the subsequent call to `ListShelves` method to retrieve the next
  // page of results.
  string next_page_token = 2;
}
Yeah, I think you've probably figured the same by now, but here they have chosen to use a simple RPC, as opposed to a server-side streaming RPC (see here). I emphasize this because, I think the important choice is not the stylistic difference between repeated versus stream, but rather the difference between a simple request-response API versus a more complex and less-ubiquitous streaming API.
In the googleapis example above, they're defining an API that returns a fixed and static number of items per page, e.g. 10 or 50. It would simply be overcomplicated to use streaming for this, when pagination is already so well understood and prevalent in software architecture and REST APIs. I think that is what they should have said, rather than "a small number." So the complexity of streaming (and the learning cost to you and future maintainers) has to be justified, that's all. Suppose instead you're fetching thousands of (x, y, z) items for a point cloud, or you're creating a live-updating bid-ask visualizer for some cryptocurrency.
Then you'd start asking yourself, "Is a simple request-response API my best option here?" So it just tends to be that, the larger the number of items needing to be returned, the more streaming APIs start to make sense. And that can be for conceptual reasons, e.g. the items are a live-updating stream in time like the above crypto example, or architectural, e.g. it would be more efficient to start displaying results in the UI as partial data streams back. I think the "small number" thing you read was an oversimplification.
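To make the two shapes concrete, here is a hedged side-by-side sketch; the request message and its page fields are illustrative, not taken from the googleapis source:
// Paginated simple RPC: one request, one bounded response.
rpc ListCustomers(ListCustomersRequest) returns (ListCustomersResponse);
message ListCustomersRequest {
  int32 page_size = 1;
  string page_token = 2;
}
message ListCustomersResponse {
  repeated Customer customers = 1;
  string next_page_token = 2; // empty when there are no more pages
}

// Server-streaming RPC: one request, many messages until the server completes.
rpc StreamCustomers(google.protobuf.Empty) returns (stream Customer);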

Why does QObject::installEventFilter use d->eventFilters.prepend(obj) and not append(obj)?

Why does QObject::installEventFilter use d->eventFilters.prepend(obj) rather than append(obj)? I want to know why it was designed this way; I'm just curious about it.
void QObject::installEventFilter(QObject *obj)
{
    Q_D(QObject);
    if (!obj)
        return;
    if (d->threadData != obj->d_func()->threadData) {
        qWarning("QObject::installEventFilter(): Cannot filter events for objects in a different thread.");
        return;
    }
    // clean up unused items in the list
    d->eventFilters.removeAll((QObject*)0);
    d->eventFilters.removeAll(obj);
    d->eventFilters.prepend(obj);
}
It's done that way because the most recently installed event filter is to be processed first, i.e. it needs to be at the beginning of the filter list. The filters are invoked by traversing the list in sequential order from begin() to end().
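A minimal sketch that makes the ordering visible; the NamedFilter class is made up for illustration:
#include <QCoreApplication>
#include <QDebug>
#include <QEvent>

class NamedFilter : public QObject
{
public:
    explicit NamedFilter(const char *name) : m_name(name) {}

protected:
    // Return false so the event continues to the next filter / the target.
    bool eventFilter(QObject *watched, QEvent *event) override
    {
        Q_UNUSED(watched);
        qDebug() << m_name << "saw event of type" << int(event->type());
        return false;
    }

private:
    const char *m_name;
};

int main(int argc, char *argv[])
{
    QCoreApplication app(argc, argv);
    QObject target;
    NamedFilter first("first-installed");
    NamedFilter second("second-installed");
    target.installEventFilter(&first);
    target.installEventFilter(&second);
    QEvent ev(QEvent::User);
    // Prints "second-installed" before "first-installed": the most
    // recently installed filter sits at the front of the list.
    QCoreApplication::sendEvent(&target, &ev);
    return 0;
}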
The most recently installed filter is to be processed first because the only two simple choices are to process it either first or last. And the second choice is not useful: when you filter events, you want to decide what happens before anyone else does. But then some newer user's filter will go before yours, so how can that be? As follows: event filters are used to amend functionality - functionality that already exists. If you added a filter somewhere inside the existing functionality, you'd effectively be interfacing to a partially defined system, with unknown behavior. After all, even Qt's implementation uses event filters. They provide the documented behavior. If your event filter were inserted last, you couldn't be sure at all what events it would see - it would all depend on the implementation details of every layer of functionality above your filter.
A system with an event filter installed is like a layer of skin on an onion - the user of that system only sees the skin, not what's inside, not the implementation. But they can add their own skin on top if they so wish, and implement new functionality that way. They can't dig into the onion, because they don't know what's in it. Of course that's a generalization: they don't know because it doesn't form an API, a contract between them and the implementation of the system. They are free to read the source code and/or reverse engineer the system, and then insert the event filter anywhere in the list they wish. After all, once you get access to QObjectPrivate, you can modify the event filter list as you wish. But then you're responsible for the behavior of not only what you added on top of the public API, but of many of the underlying layers too - your responsibility broadens. Updating the toolkit becomes next to impossible, because you'd have to audit the code and/or verify test coverage to make sure that nothing somewhere in the internals got broken.

Firebase update on disconnect

I have a node on Firebase that lists all the players in the game. This list updates as and when new players join. And when the current user (me) disconnects, I would like to remove myself from the list.
As the list will change over time, at the moment I disconnect I would like to update this list and push the update to Firebase.
This is the way I am thinking of doing it, but it doesn't work because .update() doesn't accept a function, only an object. And if I create the object beforehand, by the time .onDisconnect fires it will no longer be the latest object... How should I go about doing this?
payload.onDisconnect().update(() => {
  const withoutMe = state.roomObj
  const index = withoutMe.players.indexOf(state.userObj.name)
  if (index > -1) {
    withoutMe.players.splice(index, 1)
  }
  return withoutMe
})
The onDisconnect handler was made for this use case, but it requires that the data of the write operation is known at the time that you set the onDisconnect. If you think about it, this should make sense: since the onDisconnect write happens after your client is disconnected, the data of that write operation must be known before the disconnect.
It sounds like you're building a so-called presence system: a list that contains a node for each user that is currently online. The Firebase documentation has an example of such a presence system. The key difference from your approach is that in the documentation each user only modifies their own node.
So: when the user comes online, they write a node for themselves. And then when they get disconnected, that node gets removed. Since all users write their node under the same parent, that parent will reflect the users that are online.
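A minimal sketch of that pattern, using the namespaced JavaScript API - the players path and the use of the player name as the key are illustrative assumptions:
const presenceRef = firebase.database().ref('players/' + state.userObj.name)

firebase.database().ref('.info/connected').on('value', (snap) => {
  if (snap.val() === true) {
    // Register the onDisconnect cleanup first, then mark ourselves online.
    presenceRef.onDisconnect().remove()
    presenceRef.set(true)
  }
})
Because each client only ever removes its own fixed node, the data of the onDisconnect write is known in advance, which is exactly the constraint described above.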
The actual implementation is a bit more involved since it deals with some edge cases too. So I recommend you check out the code in the documentation I linked, and use that as the basis for your own similar system.

What is the difference between JobLockService.getLock() and JobLockService.getTransactionalLock()?

What is the difference between JobLockService.getLock() and JobLockService.getTransactionalLock(), from a practical perspective and a theoretical perspective?
Have a look at the two methods signatures:
java.lang.String getLock(org.alfresco.service.namespace.QName lockQName,
long timeToLive)
Returns a String, which is the newly created lock token. You must use the token in subsequent calls to refreshLock or releaseLock in order to manage the lock's life span manually.
void getTransactionalLock(org.alfresco.service.namespace.QName lockQName,
long timeToLive)
A void method; it asks only for a QName and a time-to-live. The same thread, or other threads, can call this method to try to acquire the lock. Subsequent calls to getTransactionalLock will automatically try to refresh the lock if it has expired, without the need to pass a token around.
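A hedged usage sketch of the two styles - the QName, TTL values, and jobLockService wiring are illustrative:
QName lockName = QName.createQName("{my.namespace}myJob");

// getLock: you receive a token and manage the lock's life span yourself.
String token = jobLockService.getLock(lockName, 30000L); // 30s time-to-live
try {
    // ... work that must not run concurrently ...
    jobLockService.refreshLock(token, lockName, 30000L); // extend if needed
} finally {
    jobLockService.releaseLock(token, lockName);
}

// getTransactionalLock: no token; the lock is bound to the current
// transaction and is released automatically when the transaction ends.
jobLockService.getTransactionalLock(lockName, 30000L);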

StateFinalization/StateInitialization activity only runs on leaf states

I am trying to get my Windows State Machine workflow to communicate with end users. The general pattern I am trying to implement within a StateActivity is:
StateInitializationActivity: Send a message to user requesting an answer to a question (e.g. "Do you approve this document?"), together with the context for...
...EventDrivenActivity: Deal with answer sent by user
StateFinalizationActivity: Cancel message (e.g. document is withdrawn and no longer needs approval)
This all works fine if the StateActivity is a "Leaf State" (i.e. has no child states). However, it does not work if I want to use recursive composition of states. For non-leaf states, StateInitialization and StateFinalization do not run (I confirmed this behaviour by using Reflector to inspect the StateActivity source code). The EventDrivenActivity is still listening, but the end user doesn't know what's going on.
For StateInitialization, I thought that one way to work around this would be to replace it with an EventDrivenActivity and a zero-delay timer. I'm stuck with what to do about StateFinalization.
So - does anyone have any ideas about how to get a State Finalization Activity to always run, even for non-leaf states?
It's unfortunate that the structure of "nested states" is presented as one of a "parent" containing "children"; the designer UI reinforces this concept. Hence it's quite natural and intuitive to think the way you are thinking. It's unfortunate because it's wrong.
The true relationship is one of "general" -> "specific". It's in effect a hierarchical class structure. Consider a much more familiar such relationship:
public class MySuperClass
{
    public MySuperClass(object parameter) { }
    protected void DoSomething() { }
}

public class MySubClass : MySuperClass
{
    protected void DoSomethingElse() { }
}
Here MySubClass inherits DoSomething from MySuperClass. The above, though, is broken, because MySuperClass doesn't have a default constructor. Also, the parameterised constructor of MySuperClass is not inherited by MySubClass. In fact, logically, a subclass never inherits the constructors (or destructors) of its superclass. (Yes, there is some magic wiring up default constructors, but that's more sugar than substance.)
Similarly, the relationship between StateActivities contained within another StateActivity is actually that the contained activity is a specialisation of the container. Each contained activity inherits the set of event-driven activities of the container. However, each contained StateActivity is a first-class, discrete state in the workflow, the same as any other state.
The containing activity in effect becomes abstract: it cannot be transitioned to, and, importantly, there is no real concept of transitioning to a state "inside" another state. By extension, there is no concept of leaving such an outer state either. As a result, there is no initialization or finalization of the containing StateActivity.
A quirk of the designer allows you to add a StateInitialization and StateFinalization and then add StateActivities to a state. If you try it the other way round, the designer won't let you, because it knows the initialization and finalization will never be run.
I realise this doesn't actually answer your question, and I'm loath to say in this case "it can't be done", but if it can, it will be a little hacky.
OK, so here's what I decided to do in the end. I created a custom tracking service which looks for activity events corresponding to entering or leaving the states which are involved in communication with end users. This service enters decisions for the user into a database when the state is entered and removes them when the state is left. The user can query the database to see what decisions the workflow is waiting on. The workflow listens for user responses using a ReceiveActivity in an EventDrivenActivity. This also works for decisions in parent 'superstates'. This might not be exactly what a "tracking service" is meant for, but it seems to work.
I've thought of another way of solving the problem. Originally, I had in mind that for communications I would use the WCF-integrated SendActivity and ReceiveActivity provided in WF 3.5.
However, in the end I came to the conclusion that it's easier to ignore these activities and implement your own IEventActivity with a local service. IEventActivity.Subscribe can be used to indicate to users that there is a question for them to answer, and IEventActivity.Unsubscribe can be used to cancel the question. This means that separate activities in the state's initialization and finalization blocks are not required. The message routing is done manually using workflow queues, and the user's response is added to the queue with the appropriate name. I used Guids for the queue names, and these are passed to the user during the IEventActivity.Subscribe call.
I used the 'File System Watcher' example in MSDN to work out how to do this.
I also found this article very instructive: http://www.infoq.com/articles/lublinksy-workqueue-mgr
