Setting up a multiplayer RPG map in Unity - networking

I want to make a small online-based RPG in Unity, something very small and nothing special, just to learn the ropes of networking. My question is from a design perspective: should I create all my towns, hunting areas, dungeons, and pretty much anything that would have been its own scene in a single-player game in one gigantic scene, and just position them so that they are nowhere near each other?
Would this be "over the top" for the network to handle, having everything running even when there are no players in a given area like DungeonA? Would the approach above work if I had markers in place that set areas like DungeonA inactive when there are no players in them?
I'd hate for this question to come off as a "how do you do it for all networked games in Unity" question, but if anyone has experience with something like this, please let me know what works well.
Thanks!

Your approach will depend on how well you can break the world you'd like to create down into small environmental components. For an interconnected system of dungeons, buildings, and cities (think older Zelda games), separate scenes are appropriate: you can load/unload between each one, since you don't need to see what's in a place until you go there. For a more open-world style (think Skyrim or Ghost Recon: Wildlands), Unity's concept of Scenes breaks down. The "Scene" paradigm employed by Unity assumes that a Scene contains some confined, definite world space. From the Unity Manual:
Think of each unique Scene file as a unique level. In each Scene, you will place your environments, obstacles, and decorations, essentially designing and building your game in pieces.
When you load a Scene, you load everything in that Scene, which makes it inappropriate for an open-world experience. The correct way to go about this (by "correct" I mean the only way it'll be performant) would be to dynamically load the assets/environment close to the player. Your in-game "Scene" (as far as the .unity asset is concerned) would be void of all environmental objects and assets, and the game would load in a manner similar to this:
A LoadScene occurs and the player enters the game world via a menu (or whatever the face of your game looks like).
The last location of the player is retrieved (or a spawn point is determined).
Environment geometry and assets within a certain distance are loaded into place.
Nearby-player information is retrieved and their visual representations are instantiated.
As the player moves around, nearby environment assets (as well as other players) can be loaded piece by piece while far-away stuff is unloaded. LOD can and should be utilized.
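As a rough sketch of that load/unload loop (the grid size, load radius, and single chunk prefab below are hypothetical; a real project might instead use additive scene loading via SceneManager.LoadSceneAsync with LoadSceneMode.Additive, or an asset streaming system):

    using System.Collections.Generic;
    using UnityEngine;

    // Hypothetical chunk streamer: keeps only the environment chunks near the player loaded.
    public class EnvironmentStreamer : MonoBehaviour
    {
        public Transform player;          // local player's transform
        public float chunkSize = 100f;    // world units per grid cell (assumed)
        public int loadRadius = 2;        // cells to keep loaded around the player
        public GameObject chunkPrefab;    // placeholder; in practice you'd look up a prefab (or scene) per cell

        private readonly Dictionary<Vector2Int, GameObject> loaded = new Dictionary<Vector2Int, GameObject>();

        private void Update()
        {
            Vector2Int center = new Vector2Int(
                Mathf.FloorToInt(player.position.x / chunkSize),
                Mathf.FloorToInt(player.position.z / chunkSize));

            // Load any missing cells within the radius.
            for (int x = -loadRadius; x <= loadRadius; x++)
            {
                for (int z = -loadRadius; z <= loadRadius; z++)
                {
                    Vector2Int cell = new Vector2Int(center.x + x, center.y + z);
                    if (!loaded.ContainsKey(cell))
                    {
                        Vector3 position = new Vector3(cell.x * chunkSize, 0f, cell.y * chunkSize);
                        loaded[cell] = Instantiate(chunkPrefab, position, Quaternion.identity);
                    }
                }
            }

            // Unload cells that fell outside the radius.
            List<Vector2Int> stale = new List<Vector2Int>();
            foreach (KeyValuePair<Vector2Int, GameObject> entry in loaded)
            {
                if (Mathf.Abs(entry.Key.x - center.x) > loadRadius ||
                    Mathf.Abs(entry.Key.y - center.y) > loadRadius)
                {
                    stale.Add(entry.Key);
                }
            }
            foreach (Vector2Int cell in stale)
            {
                Destroy(loaded[cell]);
                loaded.Remove(cell);
            }
        }
    }

Swapping chunkPrefab for per-cell content (and making the loads asynchronous) is where the real work is, but the shape of the loop stays the same.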
It may go without saying, but you describe a "small online-based RPG," and that could mean it's small enough to load the game world all at once. Understand that the limitations are mostly going to be set by the player's computer. When you're writing a networked game, the only things that need to be synchronized between players are dynamic objects (things that move and are interactable). This means that a bigger world won't cause more trouble for network traffic; more stuff moving around and being interacted with will.
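To illustrate that last point, here is a minimal sketch using the (now legacy) UNet HLAPI: the dungeon geometry itself carries no networking components, and only an interactable object like this hypothetical door syncs any state. The tag check and animator parameter are assumptions:

    using UnityEngine;
    using UnityEngine.Networking;

    // Hypothetical dungeon door (legacy UNet HLAPI). The surrounding level geometry
    // consists of plain, local-only objects; only this interactable carries networked state.
    public class DungeonDoor : NetworkBehaviour
    {
        [SyncVar(hook = "OnOpenChanged")]
        private bool isOpen;

        // Assumes a trigger collider on the door; the state change happens only on the server.
        private void OnTriggerEnter(Collider other)
        {
            if (isServer && other.CompareTag("Player"))
                isOpen = true; // the SyncVar change is pushed to all clients
        }

        // Called on clients when the SyncVar changes.
        private void OnOpenChanged(bool open)
        {
            isOpen = open; // with a hook, UNet expects you to apply the new value yourself
            GetComponent<Animator>().SetBool("Open", open);
        }
    }

The walls, floors, and props never generate traffic; only the door's isOpen flag does.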

Related

GLTF on demand and LOD for massive GLTF load

I am trying to load a very complex set of glTF models in A-Frame.
My problem is very simple; my goal is to load about 9 million glTF models in a single scene.
My idea was to combine different levels of detail in the glTF models depending on camera distance, and to load only those glTFs that are visible to the camera. Otherwise, the assets are all loaded into memory and my browser eventually hangs due to memory consumption.
Is this possible in A-Frame?
With some attention to A-Frame best practices, you should be able to make a performant scene with tens of thousands or even hundreds of thousands of polygons. But it will not be possible to load millions of distinct glTF models simultaneously in A-Frame, or any WebGL renderer for that matter.
Assuming you just want to show as many models as possible, try to take advantage of certain special cases:
If you need to render many copies of the same model, you can use a technique called "instancing". Check out aframe-instancing for some example code on how to do that. Depending on the complexity of your model, you may be able to show thousands (but probably not millions) of copies at once.
If you're making something like an RPG — which needs many things in the world, but only a few are in sight at any given time — then you can be clever about dividing your world into zones, and only loading models for the current zone.
Both of these are non-trivial to implement, and beyond the scope of a Stack Overflow question. My suggestion would be to try to get started on your own, and when you run into trouble, post new questions with the minimum amount of code necessary to see what you're trying to do. You may also find the A-Frame Slack group to be useful.

What's the best method to implement multiplayer for a Unity billiard game?

I'm making an online billiard game. I've finished all the mechanics for single player, the online account system, the online inventory system, etc. Everything's fine, but I've now gotten to the hardest part: the multiplayer. I tried syncing the position of each ball every frame, but the movement wasn't smooth at all; the balls would move back and forth and it looked bad in general. Does anyone have a solution for this? How do other billiard games, like the one on Miniclip, do it? I'm honestly stuck and frustrated, as it took me a while to learn Photon networking only to find out it's not that good at handling physics synchronization.
Would UNet be a better choice here?
I appreciate any help you give me. Thank you!
This is done with PUN already: https://www.assetstore.unity3d.com/en/#!/content/15802
You can try playing with the synchronization settings, or implement a custom OnPhotonSerializeView (see DemoSynchronization in the PUN package). Make sure that physics simulation is disabled on the synchronized (remote) clients. See DemoBoxes for a physics simulation sample.
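As a rough sketch of what that custom OnPhotonSerializeView can look like with PUN classic (assuming this component is in the ball's PhotonView observed list; the interpolation speed of 10 is an arbitrary choice):

    using UnityEngine;

    // Hypothetical ball sync for PUN classic: the owning client streams position/rotation,
    // remote copies turn off local physics and smooth toward the received state.
    public class BallSync : Photon.MonoBehaviour
    {
        private Vector3 networkPosition;
        private Quaternion networkRotation;

        private void Start()
        {
            if (!photonView.isMine)
                GetComponent<Rigidbody>().isKinematic = true; // no physics simulation on remote clients
        }

        // Called by PUN while this component is observed by the ball's PhotonView.
        private void OnPhotonSerializeView(PhotonStream stream, PhotonMessageInfo info)
        {
            if (stream.isWriting)
            {
                stream.SendNext(transform.position);
                stream.SendNext(transform.rotation);
            }
            else
            {
                networkPosition = (Vector3)stream.ReceiveNext();
                networkRotation = (Quaternion)stream.ReceiveNext();
            }
        }

        private void Update()
        {
            if (photonView.isMine)
                return;
            // Smooth toward the last received state instead of snapping on every packet.
            transform.position = Vector3.Lerp(transform.position, networkPosition, Time.deltaTime * 10f);
            transform.rotation = Quaternion.Slerp(transform.rotation, networkRotation, Time.deltaTime * 10f);
        }
    }

The client whose turn it is keeps its normal Rigidbody simulation; everyone else only ever sees the streamed transforms.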
Or, if the balls can only move along straight lines, do not send all positions every frame. Send positions and velocities only when balls collide, and do a simple velocity simulation in between. This can work even with more comprehensive physics, but the general rule is the same: synchronize at key points. Of course, this is not as simple as automatic synchronization.
Also note that classic billiards is a turn-based game, so you do not have the full complexity of simultaneous player interaction. In the worst case, you can 'record' the simulation on the current player's client and 'play it back' on the others.
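A sketch of the "synchronize at key points" idea: a keyframe (position, velocity, timestamp) is sent only when a ball is struck or collides, and each client advances the ball locally in between. The keyframe layout below is made up for illustration, and a real table would also apply friction and cushion bounces in the local simulation:

    using UnityEngine;

    // Hypothetical key-point sync: keyframes are only sent at shots and collisions;
    // every client runs the same simple motion model between them.
    public struct BallKeyframe
    {
        public int ballId;
        public Vector3 position;
        public Vector3 velocity;
        public double sentTime;   // e.g. the sender's network time
    }

    public class BallDeadReckoning : MonoBehaviour
    {
        private Vector3 keyPosition;
        private Vector3 keyVelocity;
        private double keyTime;

        // Apply a keyframe received from the player currently taking the shot.
        public void ApplyKeyframe(BallKeyframe keyframe)
        {
            keyPosition = keyframe.position;
            keyVelocity = keyframe.velocity;
            keyTime = keyframe.sentTime;
        }

        // Simple velocity simulation between key points, driven by a shared clock.
        public void Simulate(double now)
        {
            float elapsed = (float)(now - keyTime);
            // Straight-line motion; friction and bounces would go here in a real game.
            transform.position = keyPosition + keyVelocity * elapsed;
        }
    }

The 'record and play back' variant mentioned above is the same pattern with more keyframes.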

Efficiency in drawing arbitrary meshes with OpenGL (Qt)

I am in the process of coding a level design tool in Qt with OpenGL (for a relevant example see Valve's Hammer, as Source games are what I'm primarily designing this for) and have currently written a few classes to represent 3D objects (vertices, edges, faces). I plan to implement an "object" class which ties the three together, keeps track of its own vertices, etc.
After having read up on rendering polygons on http://open.gl, I have a couple of questions regarding the most efficient way to render the content. Bear in mind that this is a level editor, so I am anticipating needing to render a large number of objects with arbitrary shapes and numbers of vertices/faces.
Edit: Updated to be less broad.
At what point would it be best to create the VBO? The Qt OpenGL example creates a VBO when a viewport is initialized, but I'd expect it to be inefficient to create one for each viewport.
Regarding the submitted answer, would it be a sensible idea to create one VBO for geometry, another for mesh models, etc? What happens if/when a VBO overflows?
VBOs should be re-/initialized whenever there's a need for it. Consider VBOs as memory pools: you would not allocate one VBO per object, but group similar objects into a single VBO. When you run out of space in one VBO, you allocate another one.
Today's GPUs are optimized for rendering indexed triangles. So GL_TRIANGLES will suffice in 90% of all cases.
Frankly, modern OpenGL implementations largely ignore the buffer object usage hint. So many programs made such ill use of that parameter that it became more efficient for drivers to profile the actual usage pattern and adjust their behavior to match. However, it's still a good idea to use the right mode, and in your case that's GL_STATIC_DRAW.
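To make the "VBO as memory pool" idea concrete, here is a rough suballocation sketch. It is shown with OpenTK-style C# bindings purely for illustration (the question is Qt/C++, but the same glBufferData/glBufferSubData calls are available through QOpenGLFunctions), and the pool size is arbitrary:

    using System;
    using System.Collections.Generic;
    using OpenTK.Graphics.OpenGL;

    // Sketch of a "VBO as memory pool" allocator: group many objects into a few large
    // buffers and hand out byte offsets, instead of creating one VBO per object.
    class VertexBufferPool
    {
        private const int PoolSizeBytes = 4 * 1024 * 1024; // 4 MB per VBO (arbitrary)

        private class Pool { public int Vbo; public int Used; }
        private readonly List<Pool> pools = new List<Pool>();

        // Copies vertexData into some pool and returns the VBO handle plus byte offset.
        public void Upload(float[] vertexData, out int vbo, out int byteOffset)
        {
            int bytes = vertexData.Length * sizeof(float);
            Pool pool = pools.Find(p => PoolSizeBytes - p.Used >= bytes);
            if (pool == null)
            {
                // Out of space: allocate another pool rather than one VBO per object.
                pool = new Pool { Vbo = GL.GenBuffer(), Used = 0 };
                GL.BindBuffer(BufferTarget.ArrayBuffer, pool.Vbo);
                GL.BufferData(BufferTarget.ArrayBuffer, (IntPtr)PoolSizeBytes, IntPtr.Zero, BufferUsageHint.StaticDraw);
                pools.Add(pool);
            }

            GL.BindBuffer(BufferTarget.ArrayBuffer, pool.Vbo);
            GL.BufferSubData(BufferTarget.ArrayBuffer, (IntPtr)pool.Used, bytes, vertexData);
            vbo = pool.Vbo;
            byteOffset = pool.Used;
            pool.Used += bytes;
        }
    }

GL_STATIC_DRAW fits here because the pooled geometry is written once (or rarely) and drawn many times.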

Synchronizing game sprites over network in XNA

I'm making a two-player 3D game and need to synchronize the location of the remote player. Only one object, the avatar representing the player, needs to be synchronized between the two games.
I thought this would be pretty straightforward, but it turns out that even on a local network the remote player moves in a stuttery way when playing on two different machines (two instances of the game on the same machine look fine).
In the current approach one game is the server, while the other acts as a client, sending coordinates as text strings over a dedicated port. They are basically streaming avatar coordinates to each others, each time the avatar is actually moving.
Any idea how to resolve lag-related issues when sending/receiving coordinates? It only needs to work over a local network.
Glenn "Gaffer" Fiedler's series of articles on Networking for Game Programmers are pretty much the go-to guide for basic game network programming. He also has an article on Networked Physics.
Many of the techniques described in that series of articles (eg: reliability) are already implemented for you in libraries like Lidgren.
Basically the techniques you want to use are interpolation, extrapolation and prediction. There's a good explanation of them in this article in the aforementioned series, plus the Networked Physics one. Basically you take the unreliable, laggy stream of position data, and use that to make a visually plausible estimate of the actual path of the object.
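For the interpolation part, a minimal sketch: timestamp each received position, render the remote avatar a fixed delay in the past, and blend between the two snapshots that bracket that time. The delay value and buffer size below are assumptions:

    using System.Collections.Generic;
    using Microsoft.Xna.Framework;

    // Hypothetical snapshot buffer for the remote avatar: store timestamped positions
    // as they arrive, render the avatar slightly in the past, and interpolate between
    // the two snapshots that bracket the render time.
    public class RemoteAvatar
    {
        private struct Snapshot { public float Time; public Vector3 Position; }

        private const float InterpolationDelay = 0.1f; // render 100 ms behind "now" (assumed)
        private readonly List<Snapshot> snapshots = new List<Snapshot>();

        public void OnPositionReceived(float sendTime, Vector3 position)
        {
            snapshots.Add(new Snapshot { Time = sendTime, Position = position });
            if (snapshots.Count > 64)
                snapshots.RemoveAt(0); // keep the buffer bounded
        }

        public Vector3 GetRenderPosition(float currentTime)
        {
            float renderTime = currentTime - InterpolationDelay;

            for (int i = snapshots.Count - 1; i > 0; i--)
            {
                Snapshot newer = snapshots[i];
                Snapshot older = snapshots[i - 1];
                if (older.Time <= renderTime && renderTime <= newer.Time)
                {
                    float amount = (renderTime - older.Time) / (newer.Time - older.Time);
                    return Vector3.Lerp(older.Position, newer.Position, amount);
                }
            }

            // Nothing brackets the render time yet: fall back to the latest known position.
            return snapshots.Count > 0 ? snapshots[snapshots.Count - 1].Position : Vector3.Zero;
        }
    }

If the buffer runs dry, you can instead extrapolate forward from the last known velocity, which is the extrapolation/prediction side of the same idea.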

What are the options and best practices for PV3D inspired modeling

The studio I work at is currently developing the Tony Hawk XI website, and I am responsible for the Flash/AS3 development. As part of the pitch, I put together an augmented reality skateboard example to show, which impressed the client very much.
After a few weeks of getting stronger with Papervision3D, and getting to know the Flar Toolkit, I have successfully imported md2 and dae files that load and interact with my custom marker.
Now it has come time to develop some of my own models; I will be using 3DSMAX. I want to know what the limitations are on things like poly-count, character rigging and animation, texturing, tricks for exporting and creating the proper format file and any other bits of information that may save me some serious headaches down the road.
Currently I have a Quake2 MD2 model, Ernie, pulled inside of a FlarToolkit demo here.
This is very low-poly, and I was wondering how many polys I could expect to get away with, given that today's machines are so much faster.
I've heard that 2000 polys is about the threshold for good performance. In practice, though, it's been hit or miss, and a lot of things can have an impact. So far I've run into performance hits when using animated MovieClip materials, animated materials with an alpha channel, and precise materials.
Having to clip objects seems to be a double-edged sword. In some cases it will increase performance by a good deal, and in others (it seems to be primarily when there are a lot of polys on the edge of the viewport) it'll drop the framerate by a good 10-15 fps. So I'd say the view you set up is something to think about as well.
For example, we have a model of the interior of a store with some shelves, products, and customers walking around. In total we have just under 600 triangles (according to the StatsView, which you should check out if you haven't yet: org.papervision3d.view.stats.StatsView). On my computer, which is a new quad-core machine, it runs at a steady 30 fps (which is where we want it), but on an old Dell XPS (Pentium 4) it runs between 20 and 30 fps depending on what objects are being clipped, etc.
We try to reduce the poly count and texture creatively to fix as many of the performance issues as possible. Unfortunately our minimum specs are really low, so we need to do a lot to get it to run well.
Edit:
Another thing we're doing is swapping out less detailed models for more detailed ones when zoomed in. If you aren't zooming at all, then this probably won't help.
Hope that helps a bit.
