I would like to create an FPS game but need to manage bandwidth on the servers. What do I need to design in up front to do this?
For example should I use TCP or UDP?
What data do I need to send and should I compress it somehow?
I'm thinking I need position, heading, pitch, and a firing-gun boolean.
Other considerations I've missed?
Any help is appreciated.
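For a sense of scale, here is a sketch of how small that state can get if you quantize each field and pack it into a UDP payload (the layout, world size, and field ranges below are assumptions for illustration, not a standard):

```python
import struct

# Hypothetical compact state packet (9 bytes):
#   uint16 x, y, z -> position quantized to assumed 0..4096 m world bounds
#   uint8 heading  -> 0..255 mapped onto 0..360 degrees
#   int8 pitch     -> -90..90 degrees fits a signed byte
#   uint8 flags    -> bit 0 = firing gun
STATE = struct.Struct("<HHHBbB")
MAP_SIZE = 4096.0  # assumed world extent in metres

def pack_state(x, y, z, heading_deg, pitch_deg, firing):
    return STATE.pack(
        int(x / MAP_SIZE * 65535),
        int(y / MAP_SIZE * 65535),
        int(z / MAP_SIZE * 65535),
        int((heading_deg % 360) / 360 * 255),
        int(pitch_deg),
        1 if firing else 0,
    )

def unpack_state(data):
    qx, qy, qz, qh, qp, flags = STATE.unpack(data)
    return (qx / 65535 * MAP_SIZE, qy / 65535 * MAP_SIZE,
            qz / 65535 * MAP_SIZE, qh / 255 * 360, qp, bool(flags & 1))
```

At nine bytes per player per update, explicit compression matters far less than choosing what not to send in the first place.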
That's not true. How can you say that all real-time action games use UDP? You can't know that, because you didn't write those games; only the network programmers of those games know what they are using.
I personally know of two real-time networked shooters (one of them an MMO) that definitely use TCP. It's safe and easier to handle. And finally: if you build a game on UDP, you have to reimplement about 80% of TCP's protocol features just to make your packet transfer reliable. So UDP is unacceptable today.
Best regards.
May I direct you to the best online resource I've found on this subject?
Networking for Game Programmers
I highly recommend creating a 2D networked game before you even think about making a 3D one. Once you've read Fiedler's articles, you'll have a much better idea of how to accomplish this.
Some general notes:
All real-time action games use UDP. No exceptions.
In order to hide client-side input lag, implement client-side prediction.
In order to hide any other lag, implement interpolation and extrapolation (a sketch follows these notes).
Have fun!
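To illustrate the interpolation note above, here is a minimal sketch: buffer timestamped snapshots from the server and render remote players slightly in the past, blending between the two snapshots that bracket the render time. The 100 ms delay and the snapshot fields are assumptions to tune against your send rate:

```python
import time

RENDER_DELAY = 0.1   # render 100 ms in the past (assumed; tune to send rate)
snapshots = []       # (t, x, y) tuples from the server, oldest first

def on_server_update(t, x, y):
    snapshots.append((t, x, y))

def interpolated_position(now=None):
    render_t = (now if now is not None else time.time()) - RENDER_DELAY
    # drop snapshots that are too old to bracket the render time
    while len(snapshots) > 2 and snapshots[1][0] <= render_t:
        snapshots.pop(0)
    if len(snapshots) < 2:
        return None  # not enough data to interpolate yet
    (t0, x0, y0), (t1, x1, y1) = snapshots[0], snapshots[1]
    # clamp at the newest snapshot; extrapolation would kick in past it
    alpha = max(0.0, min(1.0, (render_t - t0) / max(t1 - t0, 1e-6)))
    return (x0 + (x1 - x0) * alpha, y0 + (y1 - y0) * alpha)
```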
I'm trying to write an app that can tell when a lot of people are close together inside a single area. I was thinking of using BLE. I don't have a good knowledge of this technology, and I have a single but big question: these days, how easy is it to see a Bluetooth device that is actually transmitting and discoverable in places like streets? Is my idea very dumb? Will it work given the spread of contact-tracing apps? Thank you.
You can scan the Bluetooth frequency bands to detect devices that are advertising themselves, so yes, this is possible.
What might be more difficult is making sense of it all. If I run a Bluetooth scanner in my study, I get dozens of devices, most of which I have no idea what they are. A lot of them will probably belong to the people living next door; some advertise their name (Apple TV, for example), but most don't. If you look at the signal strength you can probably rule out a fair few devices that are too far away for your purposes.
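As a sketch of that kind of scan-and-filter (assuming Python with the third-party bleak library; the API shown matches recent bleak versions, and the RSSI cutoff is a guess to tune on site):

```python
import asyncio
from bleak import BleakScanner  # third-party BLE library (assumption)

RSSI_CUTOFF = -80  # assumed threshold; weaker signals treated as "too far"

async def scan_nearby(scan_seconds=5.0):
    # Collect BLE advertisements for the given window, with metadata.
    found = await BleakScanner.discover(timeout=scan_seconds, return_adv=True)
    return [
        (device.address, device.name, adv.rssi)
        for device, adv in found.values()
        if adv.rssi >= RSSI_CUTOFF
    ]

if __name__ == "__main__":
    for address, name, rssi in asyncio.run(scan_nearby()):
        print(address, name, rssi)
```

One caveat: modern phones randomize their advertised Bluetooth addresses, so counting unique addresses over time will overestimate the number of actual people.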
I'm making an online billiards game. I've finished all the mechanics for single player, the online account system, the online inventory system, etc. Everything's fine, but I've now reached the hardest part: the multiplayer. I tried syncing the position of each ball every frame, but the movement wasn't smooth at all; the balls would move back and forth, and it looked bad in general. Does anyone have a solution for this? How do other billiards games, like the one on Miniclip, do it? I'm honestly stuck and frustrated; it took me a while to learn Photon networking, only to find out it's not that good at handling physics synchronization.
Would uNet be a better choice here?
I appreciate any help you give me. Thank you!
This is done with PUN already: https://www.assetstore.unity3d.com/en/#!/content/15802
You can try to play with the synchronization settings or implement a custom OnPhotonSerializeView (see DemoSynchronization in the PUN package). Make sure that physics simulation is disabled on the synchronized clients. See DemoBoxes for a physics simulation sample.
Or, if the balls can move along lines only, do not send all positions every frame. Send positions and velocities only when balls collide, and do a simple velocity simulation in between (see the sketch below). This can work even with more complex physics, but the general rule is the same: synchronize at key points. Of course, this is not as simple as automatic synchronization.
Also note that classic billiards is a turn-based game, so you do not have all the complexity of player interaction. In the worst case you can 'record' the simulation on the current player's client and 'play it back' on the others.
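A minimal sketch of that "synchronize at key points" idea (the message format and names are assumptions; in Unity/PUN the same logic would live in a MonoBehaviour rather than plain classes):

```python
# Every client runs the same cheap integration between sync messages;
# authoritative state is sent only at key points (shots and collisions).

class Ball:
    def __init__(self):
        self.x = self.y = 0.0
        self.vx = self.vy = 0.0

    def step(self, dt, friction=0.99):
        # local simulation between key points: drift and slow down
        self.x += self.vx * dt
        self.y += self.vy * dt
        self.vx *= friction
        self.vy *= friction

balls = {i: Ball() for i in range(16)}

def on_key_point(message):
    # Called when the shooting client reports a shot or collision:
    # snap every ball to the reported position and velocity.
    for ball_id, (x, y, vx, vy) in message.items():
        b = balls[ball_id]
        b.x, b.y, b.vx, b.vy = x, y, vx, vy

def on_frame(dt):
    for b in balls.values():
        b.step(dt)
```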
BACKSTORY
The other day I found a motorized wheelchair that someone was throwing away. Being a maker who spends a lot of time looking at what other people have made online, I decided to snatch it and try to make a robot out of it. I also bought an Arduino Mega, a Kinect sensor, and a motor controller to try to control the motors and give it some form of vision.
MY VISION
Honestly, I don't intend for this robot to be much more than a fun and challenging project. My current goals are to have it run SLAM algorithms to figure out where it is on a map and to navigate to predetermined points on that map. However, at this point I would be happy just being able to do simple teleop control with the keyboard.
MY PROBLEM
I have spent the past week researching ROS and how to get it talking to my Arduino. I have installed diff_drive_controller, Turtlebot, ros_control, rosserial, ros_arduino_bridge, and several others, trying to find something that will tell the motors what to do. By now I feel like I have a decent, if barely-below-the-surface, understanding of how ROS works: basically, there is a series of nodes, each publishing info for the other nodes to see and subscribing to the info it wants to read. All I want right now is a node that publishes the velocity of the motors based on navigation, teleop, or something like that. I think Turtlebot is my best bet, considering it is an all-in-one stack that does everything I want. The only problem is that I don't have an iRobot Create. It seems like it should be simple enough to intercept those commands and have them drive my own robot base, but I'm not sure which topic to listen on or how to run Turtlebot in a way that doesn't try to connect to an iRobot Create. I could just listen to the /cmd_vel_mux/input/teleop topic, but I think that would limit me to teleop only and might make it hard to move on to autonomy in the future.
What topic should I listen on? Am I going about this the right way? Are there any packages that would be better suited for my needs? Keep in mind that I am new to ros so tutorials would be appreciated.
I look forward to your responses.
Thanks, Logan
Nice-sounding project! I second the recommendation to take a look at the tutorials, but I think you should spend some more time with the basic ROS tutorials before diving into the world of Arduino + ROS.
For instance, I noticed one misconception I believe you may have. It doesn't really matter which topic your node listens to, as that's just a name that can easily be remapped through a parameter given when launching the node. The important thing is to ensure you are listening for the right type of messages; message types specify the interface by which all the different nodes communicate. There are a bunch of options, and if none of them fit your use case, you can define your own.
I suspect that for low-level things, such as the drivers for your motors, you will need to write your own ROS nodes. For advanced functionality, such as SLAM, there is a variety of options, and you can pick one suited to the input data available from your sensors.
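For the low-level side, a minimal motor-driver node could look like the sketch below (assuming rospy and a differential-drive base; the topic name, wheel separation, and the hand-off to the Arduino are all assumptions):

```python
#!/usr/bin/env python
import rospy
from geometry_msgs.msg import Twist

WHEEL_SEPARATION = 0.5  # metres; assumed for this wheelchair base

def send_to_motors(left, right):
    # Placeholder: forward the wheel speeds to the motor controller,
    # e.g. over a serial link or via rosserial on the Arduino side.
    rospy.loginfo("wheel speeds: L=%.2f R=%.2f", left, right)

def cmd_vel_callback(msg):
    # Convert the incoming Twist into left/right wheel speeds.
    left = msg.linear.x - msg.angular.z * WHEEL_SEPARATION / 2.0
    right = msg.linear.x + msg.angular.z * WHEEL_SEPARATION / 2.0
    send_to_motors(left, right)

if __name__ == "__main__":
    rospy.init_node("motor_driver")
    # "cmd_vel" is the conventional velocity topic; remap it at launch
    # to whatever your teleop or navigation stack publishes.
    rospy.Subscriber("cmd_vel", Twist, cmd_vel_callback)
    rospy.spin()
```

Because the node only depends on the geometry_msgs/Twist type, the same code works whether the commands come from keyboard teleop or, later, from an autonomous navigation stack.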
One last recommendation is to take advantage of the features of ROS that let you break a big problem into manageable subtasks. Do one thing at a time: implement a motor controller, then write a teleoperation method, taking care to specify suitable interfaces at each point. The advantage of this approach is that if you make smart choices when defining individual components with good interfaces, it is very easy to replace one with another should you so wish.
You can find a list of tutorials how to interface Arduino and ROS here.
First, this is a question not only about network lag, but also about delay from wireless controllers and delay to TVs/monitors. For a fast-paced action game, how do you compensate for these things? Has anyone come up with a reusable way to do this? If so, I haven't seen one. I'd like to see a game framework that includes this out of the box.
If there are no previous examples available, how would one implement this in an Update(float deltaTime) loop? Would it be possible to disguise or hide the lag?
Dead Reckoning: Latency Hiding for Networked Games
Valve Developer Community: Source Multiplayer Networking
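In the spirit of those references, here is a minimal dead-reckoning sketch inside an Update(deltaTime) loop: extrapolate from the last authoritative state and ease the displayed position toward the prediction so corrections don't snap (the names and the smoothing constant are assumptions):

```python
class RemoteEntity:
    def __init__(self):
        self.x = self.y = 0.0          # last authoritative position
        self.vx = self.vy = 0.0        # last authoritative velocity
        self.display_x = self.display_y = 0.0

    def on_server_state(self, x, y, vx, vy):
        # A (possibly late) authoritative update arrived; adopt it.
        self.x, self.y, self.vx, self.vy = x, y, vx, vy

    def update(self, delta_time, smoothing=10.0):
        # Dead reckoning: advance along the last known velocity so the
        # entity keeps moving even when no update arrived this frame.
        self.x += self.vx * delta_time
        self.y += self.vy * delta_time
        # Blend the displayed position toward the predicted one to hide
        # the correction jumps that follow each server update.
        blend = min(1.0, smoothing * delta_time)
        self.display_x += (self.x - self.display_x) * blend
        self.display_y += (self.y - self.display_y) * blend
```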
Ah, it has been asked previously: Dealing with Latency in Networked Games
Voting to close.
We have a device with an analog camera, and a card that samples and digitizes the feed. This is all done in DirectX. At this point, replacing hardware is not an option, but we need to code things such that we can see this video feed in real time regardless of any hardware or underlying operating-system changes in the future.
Along those lines, we've chosen Qt to implement a GUI to view this camera feed. However, if we move to Linux or another embedded platform in the future and change other hardware (including the physical device where the camera/video sampler lives), we will need to change the camera display software as well, and that's going to be a pain because it has to be integrated into our GUI.
What I proposed was migrating to a more abstract model where the data is sent over a socket to the GUI, and the video is displayed live after being parsed from the socket stream.
First, is this a good idea or a bad idea?
Secondly, how would you implement such a thing? How do video samplers usually give usable output? How can I push this output over a socket? Once I am on the receiving end parsing the output, how do I know what to do with it (as in, how to get it to render)? The only thing I can think of would be to write each sample to a file and then display the contents of the file every time a new sample arrives. That seems like an inefficient solution, if it would work at all.
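For what it's worth, one concrete shape the socket idea could take is a length-prefixed frame protocol; the header layout below is an assumption for illustration, not an existing standard:

```python
import struct

# Length-prefixed raw frames: a fixed header carrying the payload size
# and the frame dimensions, followed by the pixel bytes.
HEADER = struct.Struct(">III")  # payload_size, width, height

def send_frame(sock, width, height, pixels):
    sock.sendall(HEADER.pack(len(pixels), width, height) + pixels)

def recv_exact(sock, n):
    # TCP gives a byte stream, not messages, so loop until n bytes arrive.
    buf = b""
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("stream closed mid-frame")
        buf += chunk
    return buf

def recv_frame(sock):
    size, width, height = HEADER.unpack(recv_exact(sock, HEADER.size))
    return width, height, recv_exact(sock, size)
```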
How do you recommend I handle this? Are there any cross-platform libraries available for such a thing?
Thank you.
Edit: I am willing to accept suggestions for something different from what is listed above.
Have you looked at QVision? It is a Qt-based framework for managing video and video processing. You don't need the processing part, but I think it will do what you want.
Anything that duplicates the video stream is going to cost you in performance, especially in an embedded space. In most situations for video, I think you're better off trying to use local hardware acceleration to blast the video directly to the screen. With some proper encapsulation, you should be able to use Qt for the GUI surrounding the video, and have a class that is platform specific that you use to control the actual video drawing to the screen (where to draw, and how big, etc.).
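A rough shape for that encapsulation, as a sketch only (the class and method names are invented for illustration; the real version would live next to your Qt code):

```python
from abc import ABC, abstractmethod

class VideoRenderer(ABC):
    """Hides the platform-specific drawing path behind one interface."""

    @abstractmethod
    def attach(self, widget):
        """Bind the renderer to the GUI widget that hosts the video area."""

    @abstractmethod
    def draw_frame(self, frame):
        """Blit one frame using the local hardware-accelerated path."""

class DirectXRenderer(VideoRenderer):
    def attach(self, widget):
        ...  # acquire the widget's native window handle for DirectX

    def draw_frame(self, frame):
        ...  # present the frame through the existing DirectX path

class EmbeddedLinuxRenderer(VideoRenderer):
    def attach(self, widget):
        ...  # e.g. bind an overlay or GL surface on the target platform

    def draw_frame(self, frame):
        ...

# The GUI only ever talks to VideoRenderer, so a port means writing one
# new subclass rather than touching the surrounding Qt code.
```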
Edit:
You may also want to look at the Phonon library. I haven't looked at it much, but it appears to support showing video that may be acquired from a range of different sources.