Distributed C++ game server which uses a database - distributed-database

My C++ turn-based game server (which uses a database) can no longer keep up with the current average number of clients (players), so I want to expand it across multiple computers and databases while keeping all clients within a single game world (the servers will have to communicate with each other and use multiple databases).
Are there any tutorials/books/common standards that explain the best way to do this?

The way you put the database into the picture might be misleading: clustering solutions exist for all of the most widely used RDBMSs, so if you need to spread your DB activity across more than one DB node, you just have to check the documentation from your DB vendor.
Things get more complex when it comes to synchronizing the non-DB application state that needs to be shared among several servers. There are already a number of questions here that tackle the same problem, like here or here.
You might also be interested in a messaging system; I have heard good things about ZeroMQ.
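To make the ZeroMQ idea a bit more concrete, here is a minimal sketch (assuming the cppzmq binding, zmq.hpp; the endpoint, port, topic name, and message format are placeholders I invented, not anything from your code base) of a PUB/SUB pair that one game server could use to broadcast world-state changes to the other servers:

```cpp
// Minimal PUB/SUB sketch with cppzmq. Each game server process runs one role;
// the endpoint, port, and message format are placeholders.
#include <zmq.hpp>
#include <chrono>
#include <iostream>
#include <string>
#include <thread>

// One game server publishes world-state changes...
void run_publisher(zmq::context_t& ctx) {
    zmq::socket_t pub{ctx, zmq::socket_type::pub};
    pub.bind("tcp://*:5556");
    for (;;) {
        // In a real server this would be serialized game state, not a string.
        pub.send(zmq::buffer(std::string{"world.update player=42 x=10 y=7"}),
                 zmq::send_flags::none);
        std::this_thread::sleep_for(std::chrono::seconds(1));
    }
}

// ...and the other servers subscribe to the topics they care about.
void run_subscriber(zmq::context_t& ctx) {
    zmq::socket_t sub{ctx, zmq::socket_type::sub};
    sub.connect("tcp://server1:5556");
    sub.set(zmq::sockopt::subscribe, "world.");  // topic prefix filter
    for (;;) {
        zmq::message_t msg;
        if (sub.recv(msg))                        // blocking receive
            std::cout << msg.to_string() << '\n';
    }
}

int main(int argc, char** argv) {
    zmq::context_t ctx{1};
    if (argc > 1 && std::string{argv[1]} == "pub")
        run_publisher(ctx);
    else
        run_subscriber(ctx);
}
```

PUB/SUB is only one of the available patterns; depending on whether your servers need acknowledgements, REQ/REP or ROUTER/DEALER may fit better.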
Hope this helps.

Related

Is CouchDb suitable for use on client desktops (Windows 10)?

I want to support the users of my Windows 10 desktop application with:
Local Data (not having to perform a fetch from the cloud every time they want new data)
Offline support
Replication with a cloud database
There can potentially be multiple users (on the order of 10-100 but not 1000) simultaneously editing the same database. I would run CouchDb as a service (i.e. in a separate process from my app).
To achieve the above, I am considering installing CouchDb on each client desktop PC (all replicating to a single main cloud CouchDb instance) together with my application.
One of the reasons I am pursuing this line of thinking is that it would allow my application code to be written mostly against local data, while the sync/replication (which is probably quite complex) is taken care of by CouchDb.
I am using CouchDb as a replacement for something that is often done with sqlite, but I really want the replication ability of CouchDb (which sqlite does not have).
Is the above a scenario in which I can expect CouchDb to perform well or is there something that I am not considering?
We have multiple clients doing this successfully. This is a recommended use-case for CouchDB. The limiting factor will be the cloud vm configuration, but 100s to 1000s of clients should be no problem on a decent vm setup.
CouchDB is sometimes called "A replication protocol with a database attached to it." This sounds appropriate for your use case. However, installing CouchDB (or any external database/service), particularly as part of a client application, isn't necessarily trivial. That doesn't mean it should be avoided--it's just complexity to consider when making a choice.
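If you do go down this route, the replication itself is pleasantly simple to set up: continuous replication is just a document written into the local node's `_replicator` database. A minimal sketch (the URLs, credentials, and database names below are placeholders):

```json
{
  "_id": "push-to-cloud",
  "source": "http://localhost:5984/appdata",
  "target": "https://user:password@cloud.example.com/appdata",
  "continuous": true
}
```

You would normally add a second document with source and target swapped to pull changes from the cloud back down to the desktop.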
You might consider lighter-weight options for your client. PouchDB is the go-to solution for web apps, and often mobile apps, as it syncs with CouchDB, but has much lower resource requirements (at the cost of not being multi-tenant). For a desktop app, that may be appropriate.
Ultimately, whether CouchDB (or PouchDB, or PouchDB Server, etc) is appropriate for your use case depends on which trade-offs you're willing to make.

Atomic swap in a cross compatibility zone setup

I am pretty new to Corda and I am curious if it is possible to do a cross compatibility zone DvP. According to https://www.corda.net/2017/08/compatibility-and-upgrades/ it is possible to have different Corda networks in a global network.
My question addresses the following use case:
Let's say I have two Corda networks (compatibility zones). Each network has its own notary, nodes, customers & KYC process, and supports a certain asset.
The first network provides, for example, a payment infrastructure, and the second a securities network.
Is it possible to do that using R3 Corda, and if so, is there any example/tutorial?
Thanks in advance for any support!
The answer is yes but I think we're talking at cross-purposes :) Networks operated and governed by different entities are intended to form and operate WITHIN a compatibility zone.
The way I think it's most helpful to think of Compatibility Zones is to imagine the concept just doesn't exist... imagine there was just ONE Corda network (ie CZ) that everybody used (that was transparently/openly governed so no one firm/group of firms controlled it)... and then all the different apps and business networks existed within it... able to interoperate and transact across each other, because their nodes were compatible... they would understand and accept each other's transactions, etc.
Think about it from the perspective of a firm installing a blockchain node: getting onto any blockchain network (a Corda CZ or whatever the equivalent concept is for other platforms)... getting an identity, punching the right holes in the firewall, setting up the node infrastructure... it's analogous to the work needed to get a firm "on the internet" - setting up routers, getting IP addresses, etc, etc.
It's the kind of thing you want to do once and then reuse ruthlessly. The idea that you would have to connect to an entirely new communications network for each app your firm used would be ludicrous. And yet that's how some people seem to think blockchain deployments should be: ie for each app, you set up a separate blockchain network with its own nodes and settings and identity layer and consensus providers. But that's surely just nonsense, right?
You want to connect to a global network once and then reuse that infrastructure.
So the idea is that we try to have as few CZs as possible and encourage as many business networks as possible to form within that small number of CZs.
I know this can mess with your mind when you first hear about it because all the other enterprise blockchain platforms are going in totally the wrong direction (in my opinion..!) They seem to be encouraging the formation of a separate private network for each application. But that just seems crazy to me.
So maybe try this: even if you think I'm mad, play along with the idea for a day or so and see if it begins to grow on you :) If not, let's debate it again but I really do think this idea of multiple apps on the same overall shared network (ie multiple business networks in a single compatibility zone) is just so amazingly powerful as a concept.
So to your answer: can you do cross-app/cross-business-network DvP within a CZ? Yes! That is one of the key use-cases we invented Corda to solve... it's almost perfect for those sorts of scenarios.
Could you do it if the two apps were on different CZs? Well, yes... but it would be like asking if you could do DvP between assets managed in different databases or hosted on different blockchains... it's just messier... needing locking and 2PC and all the stuff that we can just eliminate if we hold ourselves accountable for not creating needless balkanisation/siloed deployments by standing up standalone networks unless they're really, really needed.

Photon Server vs Dedicated Master Client with PUN

(Sorry for the long-looking post, it is actually really short in context.)
Hello, I am new to network programming and its concepts. However, I have worked with PUN before, several times, and I am familiar with PUN's way of synchronizing things: RPCs, serialization, etc.
I want to achieve a fully authoritative and future-scalable server architecture that runs on dedicated servers and manages room/lobby services for clients. The server architecture that I am planning is similar to games like Rust/ARK/Hurtworld etc. However, the game will consist of 7v7 matches (not 100-ish players like them, 15 max), but will also have mechanics that alter the world (like building something in the world or altering terrain by mining, etc.). Gameplay would resemble Rust in visuals, but the mechanics would differ. The server architecture, though, should be very similar to those games, meaning it won't be peer-to-peer.
To start: "I know" that I have to use Photon Server for these types of things, so I can code server-side logic, right? The authoritative architecture, the persistent world, the user & world databases, server management: all of these should be done on the server side, if I am not mistaken. However, as I said in the beginning, I have only worked with PUN before, and the only client-server architecture I know is this: one trusted client (user) hosts the game and the others join him.
Now, I can learn Photon Server and server-side programming (hell, I have even started learning it now; it is kind of similar to how PUN does its work, Operations and Events instead of RPCs, etc.). However, to create a sustainable server architecture, I have to learn and practice it in depth. Meaning: I need profound knowledge to create a commercial-grade server.
So for the initial prototyping phase of the game, I don't want to focus my workload on learning and building Photon Server. Instead, I have come up with an idea.
What if I run Unity instances on dedicated servers that create (host) rooms with PUN (cloud), and other players around the world join these hosted matches? The dedicated servers would simply be "non-player" master clients that host the games, let other players join, and also do the server-side work themselves.
So I would create a standalone "server-only" Unity project that can manage servers, manage databases, and create and sustain persistent worlds. All the features that won't be in the user clients can live in the server (host/master) client's project, and I can manage all of this on the dedicated server itself.
So to clarify: a "special" Unity project that runs the game simulation and manages the world, the database, the players, everything, will be the master client and will host the game. "Client" Unity projects will join this game and will "send" inputs for the authoritative architecture. The server will "accept" these inputs, simulate them, and send the results back to the clients.
To clarify further: everything that Photon Server should be handling will instead be handled by PUN and the cloud relay servers, RPCs, and Serialize functions, so I won't burden my workload with server programming. Win-win, right?
I know it is probably not the best solution out there. I haven't found a single document/topic about it, so it may be a really stupid idea to begin with. But I wonder: if I start with this type of architecture, when I get past the prototype phase, would it be easy to move to Photon Server on the server side? Would it be easy and efficient to follow this approach, or should I stop whatever I am doing and start learning Photon Server?
If you need to prototype, you may do it in whatever way is convenient for you, but if you start to develop a real product, this is not a solution at all.
In order to make developers' lives easier, we introduced plugins. You may try to start with them. If at some point you see that this is not enough for you, you may extend Photon Server.
And one last thing: we (the Photon team) and the community will certainly help you, either here or on our forum.
Best,
Photonians

What is ZeroMQ underlying design architecture

I am comparatively new to ZeroMQ and would like some suggestions regarding its internal architecture.
I am planning to use ZeroMQ as a messaging framework for my work. The basic idea of what I want to achieve is to be able to dynamically scale the infrastructure based on the load and the computational capacity required to meet particular workflow deadlines.
So, if there is a need to add more nodes, the application spawns new nodes and the messaging framework should be able to incorporate the changes as well. I should also be able to point to where the additional computations should occur, or to how the framework dynamically adds the new nodes (if any). An event on a particular node decides the subsequent actions to be performed on other nodes. Here is the scenario, or the stack, that I am thinking of, but I wanted to know if it makes sense:
User applications
ZeroMQ messaging
Squid-Content based routing
Overlay
Physical Substrate
I am a bit skeptical about the above stack, as I believe ZeroMQ provides most of this functionality itself, thereby making the stack simpler.
A few points about my stack:
The physical substrate is the total set of nodes that are available for computation or as data sources.
The overlay is a logical network that is built dynamically on top of the physical network, based on the closest nodes available for a particular workflow, i.e. if two nodes exchange data frequently, then those two nodes are placed logically close to one another. Is a separate overlay like CHORD etc. required when we use ZeroMQ?
Squid is basically used for content-based routing. Is Squid required when we use ZeroMQ?
ZeroMQ messaging is for the communication between different nodes for an application.
Basically, what I wanted to know is whether the above stack can be made simpler, given that ZeroMQ has richer functionality. If so, can someone point me in the right direction or share their thoughts? I am going through the ZeroMQ documentation, but I am finding it a bit difficult to understand the intrinsic design of ZeroMQ. Please help.
Thanks
There's so much specific to your use-case here that it's almost impossible to give any definite answers. ZeroMQ is not a direct replacement for the concepts you've built into your architecture, however it may meet the goals you're trying to meet depending on how you're using them.
My suggestion would be to put your current architecture aside and start trying to build up a new one with ZMQ as its core, and see where you run into limitations that are solved by the other parts of your stack.
As for the "intrinsic design" of ZMQ, here are the basics that you need to understand as a starting point:
A ZMQ socket handles connection details for you, including managing network hiccups - but this has limits that you'll need to know
There are different kinds of ZMQ sockets, and they have opinions about how you use them. Some of them communicate asynchronously, some of them are strictly synchronous, some are one way, some are bi-directional.
If a connection between two sockets is severed (e.g. one node goes down, there is a network failure - something more than a momentary hiccup), it's your job to recognize that and re-establish that connection
There is no built in brokering or topology, you have to design and build that all yourself.
... ultimately, ZMQ provides a toolset for you to build a messaging framework, it does not provide a fully realized messaging framework out of the box. So, yes, it has the power to replace some of the other tools you're currently using, but you'll have to build it.
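As a tiny illustration of the "sockets have opinions" point, here is a sketch (assuming the cppzmq binding, zmq.hpp, and an in-process endpoint made up for the example) of a REQ/REP pair, which enforces a strict send/receive alternation:

```cpp
// REQ/REP sketch with cppzmq: the REQ socket must send before it may receive,
// and the REP socket must receive before it may reply.
#include <zmq.hpp>
#include <iostream>
#include <string>

int main() {
    zmq::context_t ctx{1};

    zmq::socket_t rep{ctx, zmq::socket_type::rep};   // "server" side
    rep.bind("inproc://demo");

    zmq::socket_t req{ctx, zmq::socket_type::req};   // "client" side
    req.connect("inproc://demo");

    req.send(zmq::buffer(std::string{"ping"}), zmq::send_flags::none);

    zmq::message_t request;
    (void)rep.recv(request);                          // REP: receive first...
    rep.send(zmq::buffer(std::string{"pong"}), zmq::send_flags::none);

    zmq::message_t reply;
    (void)req.recv(reply);                            // ...then REQ may receive
    std::cout << reply.to_string() << '\n';           // prints "pong"
}
```

Calling send twice in a row on the REQ socket, or receiving before sending, is an error - that is what the sockets being opinionated means in practice.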

Peer-to-peer replication of a sqlite database

I am looking for a way to replicate a small and simple relational database (like SQLite) across peers. This should work in an environment with unstable network connections, hence the need for each peer to have a full copy of the database. This should allow a peer to continue working off-line in the event of network failure.
To keep things simple, replication should only have to support the replication of addition of data, i.e. only INSERTs, not DELETEs or UPDATEs.
Does anyone know of a good - and ideally cross-platform - technology or method of creating such a system? I am currently looking at JXTA and JXSE, but I am put off by its complexity and the apparent lack of life in its community after the takeover of Sun by Oracle.
Thanks!
Frans
rqlite uses the Raft consensus algorithm, so it should be fairly resilient to unstable network connections.
Also, it seems to be possible to configure rqlite to accept reads even in the case of a network failure.
A similar project, dqlite, exists as a library available in various languages, but it seems less explicit about what happens in the event of a network failure.
You may want to explore JGroups for the communication layer if you don't like JXTA. For the replication, I think you will have to implement your own code.
I am working on something similar (though the code is far from ready). I'll describe a little about my intended approach, but whether that is suitable for you depends on some key design points you'd need to consider. I am not aware of any ready-built projects that will do this, unfortunately.
In particular we'd need to know what language you wish to use, or which languages you'd rather avoid.
Also, consider how you intend to do peer discovery - can you set up trust between node pairs manually, or do you want them to auto-discover?
Presumably all peers may insert data?
If you are able to use PHP, and are happy manually peering node pairs, then my approach may be of interest. Set up an ORM such as Doctrine, Propel or NotORM, and get each node to regularly sync with an internet time source. For each new row in a db, grab the data (either in an array or ORM object), serialise it, and push it out to all nodes that you have a trust relationship with. Where a push fails, keep a note of this and retry at periodic intervals (potentially giving up after a remote node fails to answer a large number of retries).
Pushes can either be kicked off by your application that creates the row, or can be called by whatever scheduler is available on each machine. A push message can be XML, or for simplicity can be just a POST message containing the new row and whatever metadata (e.g. timestamp of save, so as to resolve INSERT order from several nodes).
If your nodes do not have static IP addresses, they could be registered with a dynamic DNS addressing service so as to allow each node to stay in touch with peers even if their IP changes. You might also consider adding a message signing system, to ensure that messages between nodes are genuine.
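If PHP is not an option for you, the same push-and-retry idea looks roughly like this in C++ (a sketch only, using the SQLite C API and libcurl; the table layout, peer URL, JSON payload format, and the helper name push_to_peer are all assumptions made up for illustration):

```cpp
// Sketch of INSERT-only push replication: read rows added since the last
// successful push and POST them to a trusted peer, stopping on failure so
// the remaining rows can be retried later. Names and URLs are made up.
#include <sqlite3.h>
#include <curl/curl.h>
#include <iostream>
#include <string>

// POST one serialized row to a peer; the caller retries later on failure.
bool push_to_peer(const std::string& peer_url, const std::string& json_row) {
    CURL* curl = curl_easy_init();
    if (!curl) return false;
    curl_easy_setopt(curl, CURLOPT_URL, peer_url.c_str());
    curl_easy_setopt(curl, CURLOPT_POSTFIELDS, json_row.c_str());
    CURLcode rc = curl_easy_perform(curl);
    curl_easy_cleanup(curl);
    return rc == CURLE_OK;
}

int main() {
    curl_global_init(CURL_GLOBAL_DEFAULT);

    sqlite3* db = nullptr;
    if (sqlite3_open("local.db", &db) != SQLITE_OK) return 1;

    // The id of the last row pushed successfully would normally be persisted;
    // it is hard-coded here for brevity.
    sqlite3_int64 last_pushed_id = 0;

    sqlite3_stmt* stmt = nullptr;
    if (sqlite3_prepare_v2(db,
            "SELECT id, payload, created_at FROM entries WHERE id > ? ORDER BY id",
            -1, &stmt, nullptr) != SQLITE_OK) {
        sqlite3_close(db);
        return 1;
    }
    sqlite3_bind_int64(stmt, 1, last_pushed_id);

    while (sqlite3_step(stmt) == SQLITE_ROW) {
        sqlite3_int64 id = sqlite3_column_int64(stmt, 0);
        // Naive serialization; real code would escape the values properly.
        std::string json = "{\"id\":" + std::to_string(id) +
            ",\"payload\":\"" +
            reinterpret_cast<const char*>(sqlite3_column_text(stmt, 1)) +
            "\",\"created_at\":\"" +
            reinterpret_cast<const char*>(sqlite3_column_text(stmt, 2)) + "\"}";

        if (push_to_peer("http://peer.example/replicate", json)) {
            last_pushed_id = id;     // advance the high-water mark
        } else {
            std::cerr << "push failed for row " << id << ", will retry\n";
            break;                   // resume from this row on the next run
        }
    }
    sqlite3_finalize(stmt);
    sqlite3_close(db);
    curl_global_cleanup();
}
```

In a real deployment this loop would run per trusted peer, with a separate high-water mark for each, as described above.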
