REST API streaming versus repeated GET requests - Firebase

I'm making a turn-based game, a bit like multiplayer checkers, that works with the Firebase Realtime Database, so each client needs to know when moves are made.
I'm limited by a third-party framework that only allows REST API requests but doesn't allow REST API streaming, because there is no way to "Set the client's Accept header to text/event-stream" or "Respect HTTP Redirects, in particular HTTP status code 307".
So I'm thinking of reading the database with GET requests every second to see if there is new data, but I'm worried that this could be inefficient in terms of data transfer and cause a large bill. How much worse is this solution than REST API streaming, and is it practical?
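Something like this polling loop is what I have in mind, sketched with Java's built-in HttpClient (the database URL and the games/demo/state.json path are made-up placeholders):

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class FirebasePoller {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        // Placeholder database URL and path; the real ones would point at my game state.
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://example-db.firebaseio.com/games/demo/state.json"))
                .GET()
                .build();

        String lastBody = null;
        while (true) {
            HttpResponse<String> response =
                    client.send(request, HttpResponse.BodyHandlers.ofString());
            String body = response.body();
            if (!body.equals(lastBody)) {
                lastBody = body;       // state changed: apply the opponent's move here
                System.out.println("New state: " + body);
            }
            Thread.sleep(1000);        // one read per client per second, changed or not
        }
    }
}
```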

Since response time is critical in multiplayer games, you should also think about how this may be inefficient in terms of user experience. But of course that will depend on how the game works.
But if you think it is OK for users to have a 1000 ms delay, then the question is how many players will be playing the game daily, and how long each game takes to finish (turn-wise).
(avg. turns per game) × (avg. # of players in a single game) × (games played per day) will be the minimum number of reads for the gameplay part alone. You must also consider whether you will have to constantly check multiple documents, and there will probably be many writes and reads in the other parts of the game.
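As a rough illustration with made-up numbers: at 40 turns per game, 2 players per game, and 1,000 games per day, that is 40 × 2 × 1,000 = 80,000 reads per day for moves alone. And with one GET per client per second, the real number is far higher, since most polls return unchanged data.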
So overall, I think this is a very inefficient way to solve the problem, in many ways.
What platform are you using? Maybe someone could find a way around it somehow.

Firebase provides callback listeners for queries. You can attach a ChildEventListener to your database reference to track real-time changes in your database. As long as it is connected, it will be considered a single request.
Refer to this link
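For example, with the Android SDK a listener might look like this (the games/{gameId}/moves structure is just an illustration):

```java
import com.google.firebase.database.ChildEventListener;
import com.google.firebase.database.DataSnapshot;
import com.google.firebase.database.DatabaseError;
import com.google.firebase.database.DatabaseReference;
import com.google.firebase.database.FirebaseDatabase;

String gameId = "game42"; // hypothetical game id
DatabaseReference movesRef = FirebaseDatabase.getInstance()
        .getReference("games/" + gameId + "/moves");

movesRef.addChildEventListener(new ChildEventListener() {
    @Override
    public void onChildAdded(DataSnapshot snapshot, String previousChildName) {
        // Fires once per existing move, then again whenever a new move is written.
        System.out.println("Move: " + snapshot.getValue());
    }

    @Override
    public void onChildChanged(DataSnapshot snapshot, String previousChildName) { }

    @Override
    public void onChildRemoved(DataSnapshot snapshot) { }

    @Override
    public void onChildMoved(DataSnapshot snapshot, String previousChildName) { }

    @Override
    public void onCancelled(DatabaseError error) {
        // Called if the listener is revoked, e.g. by security rules.
    }
});
```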

Related

Realtime Multiplayer Game with Firebase

I just created a mobile game with Flutter and would like to extend it to a multiplayer online game, so I thought about making it with Firebase. I'm rather new to this domain, so I hope my questions don't sound too stupid.
As the characters in my game move around fluidly, I need to update their positions at least once per frame. So, if I used Firebase, I would need a cloud function that runs about 60 times per second and never ends. This function would take all the objects in the game (which are stored in the Realtime Database) and update their positions according to their speed. The updated values would be written back to the database, to which the clients listen; they would update the values in the game and render the objects at the correct positions. Whenever a player interacts with the game, another cloud function would be called to handle that and update whatever is needed.
Is it possible to make something like this, and does it make sense? Using cloud functions in this way does not sound very smooth to me, but I don't have another idea for implementing it. And how would it affect the costs? It sounds like a lot of function invocations and DB reads/writes.
If this is totally unthinkable, what could be the alternatives? I wouldn't mind too much using another DB and/or framework to make the game itself.
Thank you very much
Communicating positions etc. through a database won't be fast enough for you to do it 60 times per second. You'll have to use some form of mixed UDP and TCP solution, and unfortunately that doesn't exist pre-built for Flame like it does for some game engines, but it might not be too complicated to build one yourself, depending on how complex your use case is.
Like @Fattie suggests, it could be a good idea to try out whether PubNub works for you.
Update: There is now an early stages Nakama (Open source game server) integration for Flutter here.
Firebase has utterly no connection to what you are trying to do.
You just use Unity and one of the realtime-multiplayer services - like Photon.
It is inconceivable that as a hobbyist you'd start from scratch and try to write realtime, interframe, predictive quaternion networking!
In five minutes ...
https://doc.photonengine.com/en-us/pun/v2/demos-and-tutorials/pun-basics-tutorial/intro
It takes 5 minutes to download the Photon/Unity demo and be playing a realtime multiuser game with robots running around.
You then just learn how to add guns or whatever and add your own models.
Flame ...
Flame is spectacular, BTW. To do an R/T MP demo with Flutter/Flame, I'd suggest just using PubNub, which would take no time.
http://pubnub.com/developers/demos/codoodler (all of the demos are terrific). It's the world's major realtime backbone, nothing is faster, and they support Flutter, so that's about it.

Managing a connection to an external component in an Aggregate

I just discovered Axon framework and am very eager to use it as it answers a lot of my questions and concerns about DDD.
The first project I'd like to use it on involves small cameras controlled via the GigEVision protocol (over TCP and UDP for the control and stream channels). I think my problem would be the same for any case where we maintain a connection to an external component, or more generally where we want to link an external component's lifecycle to Axon's lifecycles.
I'd like to have an Aggregate named Camera to which I can send Commands to grab 1 image or start grabbing N images at a certain FPS.
What I'm not sure about is how to manage the connection to an external component in my Aggregate.
- Should I inject the client for my camera into my Camera Aggregate and consider connecting to it as part of my protocol/business commands? In that case, how would I link the camera lifecycle (a camera gets disconnected all of a sudden) to the aggregate lifecycle (creating a corresponding CameraDisconnectedEvent)?
- Should the connection be handled in a sidecar Saga which gets the camera client injected, the Saga starting on a ConnectionRequestedEvent and stopping as soon as we get a connection error from the camera? I would have the same issue of linking the connection lifecycle to the lifecycle of the Saga, I think.
- Am I leaking implementation details into the business layer, and should I manage the issue another way?
- Am I just using the wrong tool for this job and should not try to force it into Axon?
Thank you very much in advance; I hope my message and issues make sense.
Best regards,
First and foremost, you should ensure that the language spoken by the GigEVision protocol by no means transitions over into your other domain.
These two should be separate and remain so, as they cover different concerns.
This brings to light the necessity to have a translation layer of some sort.
More formally, this is called a context mapping. From a DDD perspective, you would take this even further by talking about an Anti-Corruption Layer.
The name already says it: you add a layer to ensure you are not corrupting your domain logic with that of another domain.
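As a rough sketch of what that layer might look like with Axon (the GigEVisionClient interface and the command type here are hypothetical; only CommandGateway is Axon's own API):

```java
import org.axonframework.commandhandling.gateway.CommandGateway;
import java.util.function.Consumer;

// Hypothetical wrapper around the low-level GigEVision control channel.
interface GigEVisionClient {
    void onDisconnect(Consumer<String> handler);
}

// Hypothetical command, phrased in the domain's language, not the protocol's.
record MarkCameraDisconnectedCommand(String cameraId) { }

// The anti-corruption layer: protocol callbacks in, domain commands out.
// Nothing GigEVision-specific crosses this boundary.
class GigEVisionAdapter {
    GigEVisionAdapter(CommandGateway commandGateway, GigEVisionClient client) {
        client.onDisconnect(cameraId ->
                commandGateway.send(new MarkCameraDisconnectedCommand(cameraId)));
    }
}
```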
Another useful topic to read up on here would be the notion of a Bounded Context.
I digress though, let's go back to the problem at hand: where to position this anti-corruption layer.
What is currently unclear to me is what domain requirements are in place that require the connection to be maintained all the time when requesting images.
Is the command you want to send requesting for a live feed? Or just "some" images from a given time frame?
In both scenarios I am not immediately convinced that any of these operations requires validation through a single Camera aggregate, to be honest.
You could still model this in a command and event format, as the messaging paradigm is very helpful to allow clean segregation of concerns.
But given the current description, I am uncertain whether you need DDD's idea of an Aggregate to model this in.
I might be wrong on this note, but I just don't know enough about your domain at this stage.
That's my two cents to the situation, hoping this helps!

Firebase Fan out - the most cost effective way?

I know this issue may have been raised multiple times, and I have read most of the questions available, but I did not find any that exactly answer my question. As proposed by the Firebase team, the fan-out technique is the recommended way to ensure fast data reads, but it comes at the cost of data duplication. I know this question is subjective and depends on the application, but which is the best solution in terms of cost saving ($) and data reads?
- Post the same node under multiple children (data is read in a single call, but it is redundant, so it consumes more Firebase storage) (see image: Firebase Database - the "Fan Out" technique)
- Post only one node, and have others reference the node by its key (not redundant and consumes less Firebase storage, but needs two reads - get the key, then get the node for that key) (see image: https://stackoverflow.com/a/38215398/1423345)
For context, I am building a non-profit marketplace app, so I need to apply the best solution in terms of balancing cost saving ($) against fast data reads.
In other words: reading twice (bandwidth) vs. bigger storage - which one is more cost-effective?
I would start by saying that, ideally, in Firebase you read or sync only what's necessary, so your database queries are combined with other filters to make each query as specific as possible. If you can nail that, then you will in any case build a very intelligent data structure which will be cost-effective.
Now the real debate: the fan-out technique, or just posting a reference to the nodes. I personally prefer fan-out and use it successfully, so I will answer in reference to that technique only, which will also indicate the reasons that make me not want to keep references instead.
The first and foremost thing is end-user experience and performance, which comes in the form of big-data-chunk synchronization. In general it means that instead of downloading many small chunks, you aim for the biggest practical one, so that you reduce cell-radio usage, battery drain, and bandwidth, and also keep the app updated and in sync as fast as possible.
If you aim for that kind of app performance, then you will clearly see that fan-out is the winner over the other technique, for the following reasons:
- You download one big data chunk stored in a single node, which doesn't keep your cell radio on for long.
- As you download the whole thing at once, your app performs better than others. Obviously, by "whole" I don't mean that you should download the full database; it's all about the smart balance that makes you download just what is required in the first go.
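For illustration, a fanned-out write is usually done as one multi-path update; this sketch uses the Android SDK, and the posts/user-posts structure is made up:

```java
import com.google.firebase.database.DatabaseReference;
import com.google.firebase.database.FirebaseDatabase;
import java.util.HashMap;
import java.util.Map;

String uid = "user123"; // hypothetical poster id
DatabaseReference root = FirebaseDatabase.getInstance().getReference();
String key = root.child("posts").push().getKey(); // new unique post id

Map<String, Object> post = new HashMap<>();
post.put("title", "Free couch");
post.put("uid", uid);

// One atomic update duplicates the post under every path that needs it,
// so each reader later fetches everything with a single read.
Map<String, Object> fanOut = new HashMap<>();
fanOut.put("/posts/" + key, post);
fanOut.put("/user-posts/" + uid + "/" + key, post);
root.updateChildren(fanOut);
```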
It's not that this is the only technique which will give you faster reads and a better data structure. There are other techniques, like indexing, data validation, and security rules, which are equally important; all of them, coupled properly with a correct data structure, will give you far better performance.
In a situation where you have just a reference to another node and not the actual data, you might end up with nothing to show your users. Say your users aren't getting good connectivity, and after one read, which gave you just the reference, the network drops; until the network is up again, your users see nothing, and trust me, that is a very bad situation for the app. Your aim as a developer should be to reduce the chances of those situations.
So, I would recommend you go for the fan-out technique, as it is faster and cost-effective when you consider other factors like data filtering, indexing, and security rules as well. Yes, it comes at the slight price of higher storage usage, but what does less storage mean when you don't have happy users? Still, it all comes down to personal preference; I have shared my experience and thoughts, and I hope it helps you make the right decision.
I would encourage you to go through this and get a deeper understanding of NoSQL data modelling.
Do let me know if this info helped you.

How to hack proof a data submission program

I am writing a score-submission system for games where I need to ensure that reports back to the server are not falsified (aka, hacked).
I know that I can store a password or private passkey in the program to authenticate or encrypt the request, but if the program is decompiled, a crafty hacker can extract the password/passkey and use it to falsify reports.
Does a perfect solution exist?
Thanks in advance.
No. All you can do is make it difficult for cheaters.
You don't say what environment you're running on, but it sounds like you're trying to solve a code authentication problem*: knowing that the code that is executing is actually what you think it is. This is a problem that has plagued online games forever and does not have a good solution.
Common ways in which such systems are broken:
Capture, modification and replay of submissions to the server
Modifying the binary to allow cheating
Using a debugger to modify the submission in-memory before the program applies signatures/encryption/whatever
Punkbuster is an example of a system which attempts to solve some of these problems: http://en.wikipedia.org/wiki/PunkBuster
Also consider http://en.wikipedia.org/wiki/Cheating_in_online_games
Chances are, this is probably too hard for your game. Hiding a private key in your binary and signing everything that leaves it will probably put you well ahead of the pack, security-wise.
* Apologies, I don't actually remember what the formal name for this is. I keep thinking "running code authentication", but Google comes up with nothing for the term.
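A minimal sketch of that sign-and-verify flow (plain java.security; the report string is made up, and in practice the private key would be embedded in the client binary, which is exactly what a decompiler can extract):

```java
import java.nio.charset.StandardCharsets;
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.security.Signature;
import java.util.Base64;

public class ScoreSigner {
    public static void main(String[] args) throws Exception {
        KeyPair pair = KeyPairGenerator.getInstance("RSA").generateKeyPair();

        // Client side: sign the report with the private key shipped in the binary.
        String report = "player=alice&score=12345";
        Signature signer = Signature.getInstance("SHA256withRSA");
        signer.initSign(pair.getPrivate());
        signer.update(report.getBytes(StandardCharsets.UTF_8));
        String sig = Base64.getEncoder().encodeToString(signer.sign());

        // Server side: verify with the matching public key before accepting the score.
        Signature verifier = Signature.getInstance("SHA256withRSA");
        verifier.initVerify(pair.getPublic());
        verifier.update(report.getBytes(StandardCharsets.UTF_8));
        System.out.println("valid = " + verifier.verify(Base64.getDecoder().decode(sig)));
    }
}
```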
There is one thing you can do - record all of the user inputs and send those to the server as part of the submission. The server can then replay the inputs through a local copy of the game engine to determine the score. Obviously this isn't appropriate for every type of game, though. Depending on the game, you may need to include replay protection.
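A sketch of that idea, assuming a deterministic engine; GameEngine and its scoring rule here are hypothetical stand-ins for real game logic:

```java
import java.util.List;

// Stand-in for deterministic game logic: the same inputs always produce the same score.
class GameEngine {
    private int score = 0;
    void apply(String input) { score += input.length(); } // placeholder scoring rule
    int score() { return score; }
}

class ReplayVerifier {
    // Reject the submission if replaying the inputs doesn't reproduce the claimed score.
    static boolean verify(List<String> inputs, int claimedScore) {
        GameEngine engine = new GameEngine();
        inputs.forEach(engine::apply);
        return engine.score() == claimedScore;
    }
}
```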
Another method that may be appropriate for some types of games is to include a video recording of the high-scoring play within the submission. Provide links to the videos from the high score table, along with a link to report suspicious entries. This will let you "crowd-source" cheat detection - if a cheater's score hits the table at number 1, then the players behind scores 2 through 10 have a pretty big incentive to validate the video for you. If a score is reported enough times, you can check the video yourself and decide if it should be removed (and the user banned).

How to reduce processing time on a web form

I have a web form with quite a few fields (between 15 and 40, based on user options). When the user finishes filling in the form, I block it with jQuery.blockUI, and then on the server side I process the form, pack it into an XML, and call a new page. But the transition between pages usually takes about 1 or 2 seconds, and I want to reduce it.
It would be possible to do all the processing on the next page, as the data is then sent to external web services to wait for a response. That takes up to 2 minutes, so 1 or 2 seconds are less noticeable there.
So, is there a simple way to do all the data processing and still reduce the transition time?
Thanks in advance
UPDATE: I'm pretty sure that would be the better approach. But right now time is the top priority, and while I'm convinced I know the bottleneck, I have little idea how to solve it or speed up the parsing of the data into an XML that has nearly 200 fields (about 50 come from the form, the rest from queries or code).
On a side note, those 2 seconds come not only from data parsing but also from the slow outbound connection on our development server, and from connection speeds in Spain in general. I'm 80% sure it won't be as slow on the production server, but I don't want to run the risk of assuming that nothing can be sped up.
Then, the couple of minutes spent querying external web services is out of my hands. It contacts a provider's web service that links to a couple of car insurance companies, which take the data and return a list of insurance prices (sorry, I don't know the correct word). And as this is lost time anyway, I think I can hide the two seconds of XML construction there.
The only thing I don't know is how to send the form values from the form to the results page, which loads the data with Ajax.
I think you need to focus on why it takes so long to process 40 fields. What are the potential bottlenecks on the backend? Which queries are you performing that take so long? If you can reduce the processing time to less than 10 seconds, you can get away with your page handling the processing; otherwise you need a different architecture, like REST or NServiceBus, to offload the long-running execution and somehow notify the client when you are done.
You could try to do the processing in a different thread: just take in the string, spin off the thread, and return the result. Unfortunately, thread programming doesn't qualify as "simple". By the way, "now" is typically perceived as anything below 3 seconds.
I re-read your question, and sorry for not thinking to ask first: do you have to parse the form back to XML? Is it possible to serialize your data to JSON, pass it up to the server, de-serialize it, and make the web request? The JSON format is much "lighter" than XML, and you can serialize and de-serialize with a library such as JSON.Net. This should eliminate some of your processing overhead.
With respect to the web service you call, is the data new on each request? Is there any way of requesting less data, or of storing portions of the data and refreshing them periodically? Potentially you could run a messaging server such as MSMQ and refresh your data on a scheduled basis, then only request what you need once you have the user-specific data. 30 seconds is 30 seconds.
I keep thinking about the data - you say you have over 200 fields. I am unclear as to whether you have to perform queries or calculations. If you have numerous records, have you considered a different type of schema that might make your retrievals faster? Can you pull static lookups into shared memory so you don't have to hit the disk?
