Synchronizing game sprites over the network in XNA

I'm making a two-player 3D game and need to synchronize the location of the remote player. Only one object, the avatar representing the player, needs to be synchronized between the two games.
I thought this would be pretty straightforward, but it turns out that even on a local network the remote player moves in a stuttery way when playing on two different machines (two instances of the game on the same machine look fine).
In the current approach one game acts as the server and the other as a client, sending coordinates as text strings over a dedicated port. They are essentially streaming avatar coordinates to each other whenever the avatar moves.
Any idea how to resolve the lag-related issues when sending/receiving coordinates? It only needs to work over a local network.

Glenn "Gaffer" Fiedler's series of articles on Networking for Game Programmers are pretty much the go-to guide for basic game network programming. He also has an article on Networked Physics.
Many of the techniques described in that series of articles (eg: reliability) are already implemented for you in libraries like Lidgren.
Basically the techniques you want to use are interpolation, extrapolation and prediction. There's a good explanation of them in this article in the aforementioned series, plus the Networked Physics one. Basically you take the unreliable, laggy stream of position data, and use that to make a visually plausible estimate of the actual path of the object.
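To make the interpolation part concrete, here is a minimal Python sketch (the class and parameter names are mine, not from XNA or Lidgren). The remote avatar is rendered a fixed delay in the past, so there are usually two buffered snapshots to blend between; extrapolation only kicks in when the buffer runs dry:

```python
import time

class RemoteAvatar:
    def __init__(self, interp_delay=0.1):
        self.snapshots = []               # time-ordered (timestamp, (x, y, z)) pairs
        self.interp_delay = interp_delay  # render this many seconds in the past

    def on_packet(self, timestamp, position):
        """Buffer each received position update, keeping it time-ordered."""
        self.snapshots.append((timestamp, position))
        self.snapshots.sort(key=lambda s: s[0])

    def current_position(self, now=None):
        """Return an interpolated position for rendering this frame."""
        now = time.time() if now is None else now
        render_time = now - self.interp_delay
        if not self.snapshots:
            return None
        # Interpolate between the two snapshots surrounding render_time.
        for (t0, p0), (t1, p1) in zip(self.snapshots, self.snapshots[1:]):
            if t0 <= render_time <= t1:
                a = (render_time - t0) / (t1 - t0)
                return tuple(x0 + (x1 - x0) * a for x0, x1 in zip(p0, p1))
        # Past the newest snapshot: extrapolate briefly from the last two.
        if len(self.snapshots) >= 2 and render_time > self.snapshots[-1][0]:
            (t0, p0), (t1, p1) = self.snapshots[-2:]
            a = (render_time - t0) / (t1 - t0)
            return tuple(x0 + (x1 - x0) * a for x0, x1 in zip(p0, p1))
        return self.snapshots[0][1]  # older than everything we have buffered

avatar = RemoteAvatar()
avatar.on_packet(0.00, (0.0, 0.0, 0.0))
avatar.on_packet(0.05, (1.0, 0.0, 0.0))
print(avatar.current_position(now=0.125))  # halfway between the two snapshots
```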

Related

frequency analysis of sound

I record bird cries with two microphones. The recordings can run up to 3 hours, and it is time-consuming to listen to the whole file in Audacity each day. What I want is a script that takes my original file and gives me a bunch of short audio files, each containing a bird cry. With my microphones I am able to record in mp3 or wav. The script should keep only cries whose frequency is higher than n Hz; that frequency represents the fixed background noise, which should not be saved. I don't know which language is best for this, and I have absolutely no idea how to do it.
Thank you all,
Thomas
This should be pretty easily doable in a variety of languages, but Python is a decent place to start. I'll link some relevant resources to get you started, and you can then narrow your question if you run into problems.
To read your audio file in .wav format, look at this documentation.
To take the data from your audio file and put it into a numpy array see this question and answer.
Here is the documentation for computing the Fourier transform of your data (to get the frequency content).
I would suggest taking a moving window and computing the Fourier transform of the data within that window and then saving the result to a file if there's significant content above your threshold frequency. The first link should have info on saving the audio file.
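As a rough Python sketch of that windowed approach (the filename, one-second window, 2 kHz cutoff, and 0.5 energy ratio are all placeholders to tune against your recordings):

```python
import numpy as np
from scipy.io import wavfile

rate, data = wavfile.read("recording.wav")
if data.ndim > 1:
    data = data.mean(axis=1)          # mix the two microphones down to mono

window = rate                         # one-second analysis windows
cutoff_hz = 2000                      # the 'n Hz' background threshold

clip_index = 0
for start in range(0, len(data) - window, window):
    chunk = data[start:start + window]
    spectrum = np.abs(np.fft.rfft(chunk))
    freqs = np.fft.rfftfreq(len(chunk), d=1.0 / rate)
    # Keep the window if most of its spectral energy sits above the cutoff.
    ratio = spectrum[freqs >= cutoff_hz].sum() / (spectrum.sum() + 1e-12)
    if ratio > 0.5:
        wavfile.write(f"cry_{clip_index:03d}.wav", rate,
                      chunk.astype(np.int16))
        clip_index += 1
```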
You can get some background on using the Fourier transform for this type of application from this Q&A and if it turns out that your problem is really difficult, I would suggest looking into some of the methods for speech detection.
For a more out-there suggestion, you could try frequency-shifting your recording by adjusting the sample rate to make bird sounds resemble human speech, and then use a black-box tool like Google's VAD to pick out the bird calls. I'm not sure how well that would work, though.
The problem of cutting up a long file into sections of interest is usually referred to as (automatic) Audio Segmentation. If you are willing to output fixed-length audio clips (say, 10 seconds), you can also treat it as an Audio Classification problem.
The latter is a very well-studied problem that has also been applied to birds.
The DCASE2018 challenge had a task on Bird Detection, and it attracted lots of advanced methods. Basically all the best-performing systems use a Convolutional Neural Network on log-scaled mel-spectrograms. A mel-spectrogram is 2D, so the task essentially becomes image classification. Many of the submissions are open source, so you can look at the code and play with them. Do note that they are mostly focused on scoring well in a research competition, not on being practical tools for splitting a few files.
If you want to build your own model for this, I would recommend starting with a Convolutional Neural Network pretrained on images, then training it further on the DCASE2018 data, and then testing it on your own data. That should give a very accurate system, though it will take a while to set up.
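For a feel of the input representation, here is a minimal sketch using librosa (assumed to be installed; the filename and parameters are common defaults, not the settings of any particular DCASE submission):

```python
import numpy as np
import librosa

y, sr = librosa.load("clip.wav", sr=22050)
mel = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=64)
log_mel = librosa.power_to_db(mel, ref=np.max)

# log_mel is a 2D array (mel bands x time frames), so a convolutional
# network can treat it exactly like a single-channel image.
print(log_mel.shape)
```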

Setting up a multiplayer RPG map in Unity

I want to make a small online RPG in Unity, something very small and nothing special, just to learn the ropes of networking. My question is from a design perspective: should I create all my towns, hunting areas, dungeons, and pretty much anything that would have been its own scene in a single-player game, in one gigantic scene, and just position them so they are nowhere near each other?
Would that be over the top for the network to handle, with everything running even when there are no players in a given area, like DungeonA? Would the approach above work if I had markers in place that set areas like DungeonA inactive whenever no players are in them?
I'd hate for this question to come off as a "how do you do it for all networked games in Unity", but if anyone has experience with something like this, please let me know what works well.
Thanks!
Your approach will depend on how well you can break down the world you'd like to create into small environmental components. For an interconnected system of dungeons, buildings, and cities (think older Zelda games), separate scenes are appropriate. You can load/unload between each one, since you don't need to see what's in a place until you go there. For a more open-world style (think Skyrim or Ghost Recon: Wildlands), Unity's concept of Scenes breaks down. The "Scene" paradigm employed by Unity assumes that a Scene contains some confined, definite world space. From the Unity Manual:
"Think of each unique Scene file as a unique level. In each Scene, you will place your environments, obstacles, and decorations, essentially designing and building your game in pieces."
When you load a Scene object, you load everything in that Scene and this makes it inappropriate for an open-world experience. The correct way to go about this issue (by "correct" I mean the only way it'll be performant) would be to dynamically load the assets/environment close to the player. Your in-game "Scene" (as far as the .unity asset is concerned) would be void of all environmental objects and assets and load in a manner similar to this:
A LoadScene occurs and the player enters the game world via a menu (or whatever the face of your game looks like).
The last location of the player is retrieved (or a spawn point is determined).
Environment geometry and assets within a certain distance are loaded into place.
Nearby players' information is retrieved and their visual representations are instantiated.
As the player moves around, nearby environment assets (as well as other players) can be loaded piece by piece while far-away stuff is unloaded. LOD can and should be utilized.
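Here is an engine-agnostic sketch of that streaming loop, in Python for brevity; load_chunk and unload_chunk are stand-ins for what, in Unity, would be additive async loads (e.g. SceneManager.LoadSceneAsync with LoadSceneMode.Additive) and the matching unloads:

```python
CHUNK_SIZE = 100.0   # world units covered by one streamed chunk
LOAD_RADIUS = 2      # keep a 5x5 grid of chunks centred on the player

loaded = {}          # (cx, cz) -> handle for the loaded chunk

def load_chunk(coord):
    print("loading", coord)      # stand-in for kicking off an async load
    return object()

def unload_chunk(coord, handle):
    print("unloading", coord)    # stand-in for releasing the chunk's assets

def update_streaming(player_x, player_z):
    """Call every frame (or every few frames) with the player's position."""
    cx, cz = int(player_x // CHUNK_SIZE), int(player_z // CHUNK_SIZE)
    wanted = {(cx + dx, cz + dz)
              for dx in range(-LOAD_RADIUS, LOAD_RADIUS + 1)
              for dz in range(-LOAD_RADIUS, LOAD_RADIUS + 1)}
    for coord in wanted - loaded.keys():
        loaded[coord] = load_chunk(coord)
    for coord in loaded.keys() - wanted:
        unload_chunk(coord, loaded.pop(coord))

update_streaming(12.0, 340.0)    # initial load around the spawn point
update_streaming(160.0, 340.0)   # player moved east: new column in, old out
```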
In case it doesn't go without saying: you say a "small online based RPG", and that could mean it's small enough to load the whole game world at once. Understand that the limitations are mostly going to be set by the player's computer. When you're writing a networked game, the only things that need to be synchronized between players are dynamic objects (things that move and are interactable). This means a bigger world won't cause more network traffic; more stuff moving around and being interacted with will.

What's the best method to implement multiplayer on a Unity Billiard game?

I'm making an online billiards game. I've finished all the mechanics for single player, the online account system, the online inventory system, etc. Everything's fine, but I've now gotten to the hardest part: the multiplayer. I tried syncing the position of each ball every frame, but the movement wasn't smooth at all; the balls would move back and forth, and it looked bad in general. Does anyone have a solution for this? How do other billiard games, like the one on Miniclip, do it? I'm honestly stuck and frustrated, as it took me a while to learn Photon networking only to find out it's not that good at handling physics synchronization.
Would uNet be a better choice here?
I appreciate any help you give me. Thank you!
This is done with PUN already: https://www.assetstore.unity3d.com/en/#!/content/15802
You can try to play with the synchronization settings or implement a custom OnPhotonSerializeView (see DemoSynchronization in the PUN package). Make sure that physics simulation is disabled on the synchronized clients. See DemoBoxes for a physics-simulation sample.
Or, since balls move along straight lines between collisions, do not send all positions every frame. Send positions and velocities only when balls collide, and run a simple velocity simulation in between. This can work even with more comprehensive physics, but the general rule is the same: synchronize at key points. Of course, this is not as simple as automatic synchronization.
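A small sketch of that idea, with an assumed exponential friction model (the constant and class names are illustrative, not from PUN or uNet): each client advances balls deterministically from the last synchronized snapshot, and the network only carries a new snapshot when a collision changes a trajectory.

```python
import math

FRICTION = 0.8   # fraction of velocity retained per second (assumed)

class BallState:
    def __init__(self, pos, vel, t):
        self.pos, self.vel, self.t = pos, vel, t

    def position_at(self, now):
        """Deterministically advance from the last synchronized snapshot."""
        dt = now - self.t
        # Closed-form integral of v0 * FRICTION**t from 0 to dt.
        scale = (FRICTION ** dt - 1.0) / math.log(FRICTION)
        return tuple(p + v * scale for p, v in zip(self.pos, self.vel))

# On each collision, the authoritative client broadcasts a fresh snapshot
# and every peer replaces its BallState; between snapshots, no traffic.
cue = BallState(pos=(0.0, 0.0), vel=(2.0, 1.0), t=0.0)
print(cue.position_at(0.5))
print(cue.position_at(1.0))
```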
Also note that classic billiards is a turn-based game, so you do not have all the complexity of simultaneous player interaction. In the worst case you can 'record' the simulation on the current player's client and 'play it back' on the others.

Interfacing ROS and Arduino

BACKSTORY
The other day I found a motorized wheelchair that someone was throwing away. Being a maker who spends a lot of time looking at what other people have made online, I decided to snatch it and try to make a robot out of it. I also bought an Arduino Mega, a Kinect sensor, and a motor controller to try to control the motors and give it some form of vision.
MY VISION
Honestly, I don't intend for this robot to be much more than a fun and challenging project. My current goals are to have it run SLAM algorithms to figure out where it is on a map and to navigate to predetermined points on that map. At this point, though, I would be happy with just simple teleop control with the keyboard.
MY PROBLEM
I have spent the past week researching ROS and how to get it talking to my Arduino. I have installed diff_drive_controller, TurtleBot, ros_control, rosserial, ros_arduino_bridge, and several others, trying to find something that will tell the motors what to do. By now I feel like I have a good, barely-below-the-surface understanding of how ROS works: basically, there is a series of nodes, each publishing info for the other nodes to see and subscribing to the info they want to read. All I want right now is a node that publishes the velocity the motors should have, based on navigation or teleop or something like that. I think TurtleBot is my best bet, considering it is an all-in-one stack that does everything I want it to do. The only problem is that I don't have an iRobot Create, but it seems like it should be simple enough to intercept those commands and have them drive my own robot base. However, I'm not sure which topic to listen on, or how to run TurtleBot in a way that doesn't try to connect to an iRobot Create. I could just listen to the /cmd_vel_mux/input/teleop topic, but I think that would limit me to teleop and might make it hard to move on to autonomy in the future.
What topic should I listen on? Am I going about this the right way? Are there any packages better suited to my needs? Keep in mind that I am new to ROS, so tutorials would be appreciated.
I look forward to your responses
Thanks, Logan
Nice-sounding project! I second the recommendation to take a look at the tutorials, but I think you should spend some more time with the basic ROS tutorials before diving into the world of Arduino + ROS.
For instance, I noticed one misconception you may have: it doesn't really matter which topic your node listens to, as the topic is just a name that can easily be remapped through a parameter given when launching the node. The important thing is to ensure you are listening for the right type of messages; message types specify the interface by which all the different nodes communicate. There are a bunch of options, and if none of them fit your use case, you can define your own.
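For example, a driver node like the one you need would subscribe to velocity commands as geometry_msgs/Twist messages, which is what teleop and navigation nodes conventionally publish. This minimal rospy sketch uses placeholder node and topic names, both of which can be remapped at launch:

```python
#!/usr/bin/env python
import rospy
from geometry_msgs.msg import Twist

def on_cmd_vel(msg):
    # Turn the requested body velocities into motor commands here, e.g.
    # by forwarding them over serial to the Arduino.
    rospy.loginfo("linear.x=%.2f angular.z=%.2f", msg.linear.x, msg.angular.z)

if __name__ == "__main__":
    rospy.init_node("base_driver")          # placeholder node name
    rospy.Subscriber("cmd_vel", Twist, on_cmd_vel)
    rospy.spin()
```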
I suspect that for low-level things, such as drivers for your motors, you will need to write your own ROS nodes. For advanced functionality, such as SLAM, there is a variety of options, and you can find one suited to the input data available from your sensors.
One last recommendation is to take advantage of the features of ROS that let you break a big problem into manageable subtasks. Do one thing at a time: implement a motor controller, then write a teleoperation method, taking care to specify suitable interfaces at each point. The advantage of this approach is that if you make smart choices in defining individual components with good interfaces, it is very easy to replace one with another should you so wish.
You can find a list of tutorials on how to interface Arduino and ROS here.

What are the options and best practices for PV3D-inspired modeling

The studio I work at is currently developing the Tony Hawk XI website, and I am responsible for the Flash/AS3 development. As part of the pitch, I put in an augmented reality skateboard example to be shown, which impressed the client very much.
After a few weeks of getting stronger with Papervision3D and getting to know the FLARToolkit, I have successfully imported md2 and dae files that load and interact with my custom marker.
Now it has come time to develop some of my own models; I will be using 3ds Max. I want to know what the limitations are on things like poly count, character rigging and animation, and texturing, plus any tricks for exporting and creating the proper file format, and any other bits of information that may save me some serious headaches down the road.
Currently I have a Quake 2 MD2 model, Ernie, pulled into a FLARToolkit demo here.
It is very low-poly, and I was wondering how many polys I could expect to get away with, given that today's machines are so much faster.
Brian Hodge (blog.hodgedev.com, hodgedev.com)
I've heard that 2000 polys is about the threshold for good performance. In practice, though, it's been hit or miss, and a lot of things can have an impact. So far I've run into performance hits when using animated MovieClip materials, animated materials with an alpha channel, and precise materials.
Having to clip objects seems to be a double-edged sword. In some cases it will increase performance by a good deal, and in others (it seems to be primarily when there are a lot of polys on the edge of the viewport) it'll drop the framerate by a good 10-15 fps. So I'd say the view you set up is something to think about as well.
For example, we have a model of the interior of a store with some shelves, products, and customers walking around. In total we have just under 600 triangles (according to the StatsView, which you should check out if you haven't yet: org.papervision3d.view.stats.StatsView). On my computer, a new quad-core machine, it runs at a steady 30fps (which is where we want it), but on an old Dell XPS (Pentium 4) it runs between 20 and 30fps, depending on what objects are being clipped, etc.
We try to reduce the poly count and texture creatively to fix as many of the performance issues as possible. Unfortunately our minimum specs are really low, so we need to do a lot to get it to run well.
Edit:
Another thing we're doing is swapping less detailed models out for more detailed ones when zoomed in. If you aren't zooming at all, then this probably won't help.
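The swap logic itself can be as simple as this sketch (the model names and distance thresholds are made up): pick the cheapest model that still looks acceptable at the current camera distance.

```python
LODS = [(10.0, "ernie_high"), (30.0, "ernie_med"), (float("inf"), "ernie_low")]

def pick_model(distance_to_camera):
    # Thresholds are sorted nearest-first, so the first match wins.
    for max_dist, model in LODS:
        if distance_to_camera <= max_dist:
            return model

print(pick_model(5.0))    # ernie_high: camera is close, spend the polys
print(pick_model(50.0))   # ernie_low: far away, the cheap mesh will do
```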
Hope that helps a bit.
