I'm creating a video chat application, and no matter what combination of Camera/Microphone/NetStream properties and functions I use, I cannot get high quality video/audio. I get occasional audio latency, pixelated video, occasional frozen video and the degree of each depends on the combination of properties/functions I set/call.
Others such as TokBox, TinyChat, Chat Roulette, etc. have achieved great video/audio quality with FMS, so what is the secret? At least point me in the right direction, because right now I'm not impressed with FMS's ability to provide a good video/audio experience.
BTW, I'm using a P2P mesh with a group specifier, not NetStream.DIRECT_CONNECTIONS.
Thanks in advance!
Try the Camera.setQuality() function (on the client), particularly the 'bandwidth' parameter. You might also want to look at your own internet/server connection, since by default the quality adapts to the available bandwidth. Those other sites have server farms with massive amounts of bandwidth, something I doubt you have yourself.
If you are using RTMFP, you cannot depend on the quality, as it is a peer-to-peer technology. If you need guaranteed quality and recording, go for RTMP.
I wish I could play music or video on one computer, and have a second computer playing the same media, synchronized. As in, I can hear both computers' speakers at the same time, and it doesn't sound funny.
I want to do this over Wi-Fi, which is slightly unreliable.
Algorithmically, what's the best approach to this problem?
EDIT 1
Whether both computers "play" the same media, or one "plays" the media and streams it to the other, doesn't matter to me.
I am certain this is a tractable problem because I once saw a demo of Wi-Fi speakers. That was 5+ years ago, so I figure the technology should make it even easier today.
(I myself was looking for an application which did this, hoping I wouldn't have to write one myself, when I stumbled upon this question.)
overview
You introduce a bit of buffer latency and use a network time-synchronization protocol to align the streams. That is, you split the stream up into packets, and timestamp each packet with "play later at time T", where T is for example 50-100ms in the future (or more if the network is glitchy). You send (or multicast) the packets on the local network, to all computers in the chorus. The computers will all play the sound at the same time because the application clock is synced.
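A minimal sketch of that idea in Python, assuming the machines' clocks are already synchronized; the multicast address, PLAY_DELAY value, and play_chunk callback are all my own placeholders:

```python
import socket
import struct
import time

PLAY_DELAY = 0.1                    # 50-100ms of buffer; raise if the network is glitchy
MCAST_GROUP = ("239.0.0.1", 5005)   # hypothetical multicast address for the chorus

def send_stream(chunks):
    """Sender: stamp each audio chunk with an absolute 'play at time T'."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    for chunk in chunks:
        play_at = time.time() + PLAY_DELAY   # clocks assumed already synced (e.g. NTP)
        sock.sendto(struct.pack("!d", play_at) + chunk, MCAST_GROUP)

def receive_and_play(sock, play_chunk):
    """Receiver: hold each chunk until its timestamp, then hand it to the sound card."""
    while True:
        packet, _ = sock.recvfrom(65536)
        (play_at,) = struct.unpack("!d", packet[:8])
        wait = play_at - time.time()
        if wait > 0:
            time.sleep(wait)                 # everyone plays at the agreed instant
        play_chunk(packet[8:])
```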
Note that there may be other factors like OS/driver/soundcard latency which may have to be factored into the time-synchronization protocol. If you are not too discerning, the synchronization protocol may be as simple as one computer beeping every second -- plus you hitting a key on the other computer in beat. This has the advantage of accounting for any other source of lag at the OS/driver/soundcard layers, but has the disadvantage that manual intervention is needed if the clocks become desynchronized.
hybrid manual-network sync
One way to account for other sources of latency, without constant manual intervention, is to combine this approach with a standard network-clock synchronization protocol; the first time you run the protocol on new machines:
1. synchronize the machines with manual beat-style intervention
2. synchronize the machines with a network-clock sync protocol
3. for each machine in the chorus, take the difference of the two synchronizations; this is the OS/driver/soundcard latency of each machine, which they each keep track of
Now whenever the network backbone changes, all one needs to do is resync using the network-clock sync protocol (#2), and subtract out the OS/driver/soundcard latencies, obviating the need for manual intervention (unless you change the OS/drivers/soundcards).
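A sketch of that bookkeeping, where manual_beat_offset() and ntp_offset() are hypothetical helpers (not from the text) that each return this machine's clock offset in seconds:

```python
def calibrate(manual_beat_offset, ntp_offset):
    """One-time, per machine: isolate the OS/driver/soundcard latency (step 3)."""
    manual = manual_beat_offset()    # step 1: includes the hardware latency
    network = ntp_offset()           # step 2: network clock offset only
    return manual - network          # hardware latency; store it per machine

def resync(ntp_offset, hardware_latency):
    """After any network change: re-run only the network sync (step 2)."""
    return ntp_offset() + hardware_latency   # no manual intervention needed
```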
nature-mimicking firefly sync
If you are doing this in a quiet room and all machines have microphones, you do not even need manual intervention (#1), because you can have them all follow a "firefly-style" synchronizing algorithm. Many species of fireflies in nature will all blink in unison. http://tinkerlog.com/2007/05/11/synchronizing-fireflies/ describes the algorithm these fireflies use: "If a firefly receives a flash of a neighbour firefly, it flashes slightly earlier." Flashes correspond to beeps or buzzes (through the soundcard, not the mobo piezo buzzer!), and seeing corresponds to listening through the microphone.
This may be a bit awkward over very large room distances due to the speed of sound, but I doubt it'll be an issue (if so, decrease rate of beeping).
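A toy simulation of the firefly idea; all the parameters here (nudge size, step size, oscillator count) are made up, but with pulse-coupled oscillators like these the phases tend to cluster over time:

```python
import random

# N oscillators advance their own phase; when a phase wraps past 1.0 the
# machine "beeps" (through the soundcard), and every other machine that
# hears it nudges its own phase forward -- "flashes slightly earlier".

N, NUDGE, DT = 5, 0.03, 0.01
phases = [random.random() for _ in range(N)]

for _ in range(5000):
    beeped = [i for i in range(N) if phases[i] + DT >= 1.0]
    phases = [(p + DT) % 1.0 for p in phases]
    for i in beeped:
        for j in range(N):
            if j != i:
                phases[j] = min(phases[j] + NUDGE, 1.0)  # advance toward firing

print("phase spread:", max(phases) - min(phases))  # shrinks as they sync
```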
The synchronization is relative to the listener's position relative to each speaker. I don't think the reliability of the network has as much to do with this synchronization as the content of the audio stream does. To synchronize, you need to find the distance between each speaker and the listener, then take the difference between each of those values and the value for the farthest speaker. For each 1.1 feet of difference, delay the closer speaker by 1ms. This ensures that the audio streams reach the listener at the same time. This all assumes an open area, as any objects in proximity will generate reflections of the audio waves and create destructive interference. Objects within the area may also transmit sound at a slower speed, resulting in delayed sound of their own.
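A quick sketch of that delay calculation (the speaker distances are made-up numbers):

```python
# Sound travels roughly 1.1 ft per millisecond (~343 m/s).
FEET_PER_MS = 1.1

def speaker_delays_ms(distances_ft):
    """Delay for each speaker so all streams reach the listener together."""
    farthest = max(distances_ft)
    return [(farthest - d) / FEET_PER_MS for d in distances_ft]

print(speaker_delays_ms([10.0, 21.0]))  # near speaker delayed ~10 ms
```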
I'm assigned to a project where my code is supposed to perform simultaneous uploads and downloads of some files on the same FTP or HTTP server. The speed is measured, and some conclusions are drawn from this.
Now, the problem is that on high-speed connections we're getting pretty much expected results in terms of throughput, but on slow connections (think ideal CDMA 1xRTT link) either download or upload wins at the expense of the opposite direction. I have a "higher body" who's convinced that CDMA 1xRTT connection is symmetric and thus we should be able to perform data transfer with equivalent speeds (~100 kbps in each direction) on this link.
My measurements show that without heavy tweaking of the code in terms of buffer sizes and data-link throttling, it's not possible to get equal speeds under the aforementioned conditions. I tried both my multithreaded code and a simple batch file that automates Windows' ftp.exe to perform the transfer -- same result.
So, the question is: is it really possible to perform data transfer on a slow symmetrical link with equivalent speeds? Is the "higher body" right in their expectations? If yes, do you have any suggestions on what I should do with my code in order to achieve such throughput?
PS.
I completely re-wrote the question, so it would be obvious it belongs to this site.
CDMA 1x consists of up to 15 channels of 9.6kbps traffic. This results in a total throughput of 144kbps.
Two channels are used for command and control signals (talking to base stations, associating/disassociating, SMS traffic, ring signals, etc).
That leaves you with up to 124.8kbps.
--> Each channel is one way. <--
They are dynamically switched and allocated depending on the need.
Generally you'll get more download than upload because that's the typical cell phone modem usage. But you'll never get more than 120kbps total aggregate bandwidth.
In practice, due to the overhead of 1xRTT encoding, error correction, resends, etc., you'll typically experience between 60kbps and 90kbps even if you have all the channels possible.
This means that you can probably only get 30kbps-60kbps of upload and download simultaneously.
Further, due to switching the channels dynamically (and the fact that the base station controls this more than your modem - they need to manage base station channels carefully to keep channels free for voice calls) you'll lose time when it switches channels - it's not an instantaneous process.
So - 1xRTT can, in theory, give you 124kbps one way, but due to overhead, switching times, base station capacity, or the phone company simply limiting such connections for other reasons, you can't depend on a symmetrical link.
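As a back-of-the-envelope check of the numbers above (the 60% factor is my own rough stand-in for the encoding/resend overhead just described):

```python
channels, rate_kbps = 15, 9.6
raw = channels * rate_kbps             # 144.0 kbps theoretical aggregate
data = (channels - 2) * rate_kbps      # 124.8 kbps after 2 control channels
effective = data * 0.6                 # ~75 kbps, inside the 60-90 kbps range above
per_direction = effective / 2          # ~37 kbps each way if split evenly
print(raw, round(data, 1), round(effective, 1), round(per_direction, 1))
```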
NOTE:
This will vary to some degree based on the provider and the modem. For instance, some modems have 16 channels, and some providers support 16 channels. In some cases those modems and providers work well together and can provide a full 144kbps aggregate raw bandwidth to the application, with only one dedicated channel (which has to work pretty hard) to deal with control, switching, and other issues. Even then, though, with the overhead of the modem communications, then the overhead of PPP, then the overhead of IP, then the overhead of TCP, you're still looking at maybe 100-120kbps total bandwidth, both up and down.
Lastly, no provider yet supports transparent transfer of IP traffic. In other words, if your modem is moving, the modem will switch to a new base station, but you'll completely drop the PPP session and have to restart it, as well as all the TCP sessions and such. You typically won't get the same IP address, so your TCP sessions will not recover gracefully.
The "fun" aspect to this twist is that it can happen even if you aren't moving. If one base station gets loaded down, you may be transferred to another base station if you are close enough - there are other things that may make your modem transfer even without you moving. So make sure you take this into account, since you seem keen on keeping a full-duplex, symmetric channel open. It's tough to write stuff that will recover gracefully, never mind anticipate it and do it quickly. You would do well to work very closely with a modem manufacturer (such as Kyocera) on this - otherwise you won't get the documentation on how to control the modem chipset at the low level that you need.
-Adam
I think the whole drama with equal high speeds in both directions is because my higher body thinks that they have 144 kbps on the uplink AND 144 kbps on the DOWNLINK (== TWO pipes). Whereas in reality we have ONE 144 kbps pipe which switches directions while I transfer files.
Tell me whether I'm right or wrong, please.
I am working on a client proposal and they will need to upgrade their network infrastructure to support hosting an ASP.NET application. Essentially, I need to estimate peak usage for a system with a known quantity of users (currently 250). A simple answer like "you'll need a dedicated T1 line" would probably suffice, but I'd like to have data to back it up.
Another question referenced NetLimiter, which looks pretty slick for getting a sense of what's being used.
My general thought is that I'll fire the web app up, use the system as I'd anticipate it being used at the customer -- really at a leisurely pace -- over a certain time span, and then multiply the bandwidth usage by the number of users and divide by the time.
This doesn't seem very scientific. It may be good enough for a proposal, but I'd like to see if there's a better way.
I know there are load tools available for testing web application performance, but it seems like these would not accurately simulate peak user load for bandwidth testing purposes (too much at once).
The platform is Windows/ASP.NET and the application is hosted within SharePoint (MOSS 2007).
In lieu of a good reporting tool for bandwidth usage, you can always do a rough guesstimate.
N = Number of page views in busiest hour
P = Average Page size
(N * P) / 3600 = Average traffic per second.
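A worked example with made-up numbers (10,000 views in the peak hour, 50 KB average page):

```python
N = 10_000           # page views in the busiest hour (made-up)
P = 50 * 1024        # average page size in bytes (made-up)
bytes_per_sec = N * P / 3600
kbps = bytes_per_sec * 8 / 1000
print(f"{bytes_per_sec:,.0f} B/s ~= {kbps:,.0f} kbps")  # ~142,222 B/s ~= 1,138 kbps
```

Note that this hypothetical load already approaches a T1's 1.544 Mbps before adding any headroom.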
The server itself will have a lot more internal traffic to the db server/NAS/etc. But outward-facing, that should give you a very rough idea of utilization. Obviously you will need to provision well above that value, as you never want to be 100% utilized and need to allow for other traffic.
I would also not suggest using an arbitrary number like 250 users. Use the heaviest production day/hour as a reference. Double or triple it if you like, but if you have good log files/user auditing it will give you the real distribution of user behavior and make your guesstimate more accurate.
As another commenter pointed out, a data center is a good idea when redundancy and bandwidth availability become a concern. Your needs may vary, but do not dismiss the suggestion lightly.
There are several additional questions that need to be asked here.
Is it 250 total users, or 250 concurrent users? If concurrent, is that a peak of 250, or 250 typically? If it's 250 total users, are they all expected to use it at the same time (eg, an intranet site, where people must use it as part of their job), or is it more of a community site where they may or may not use it? I assume from the way you've worded this that it is 250 total users, but that still doesn't tell enough about the site to make an estimate.
If it's a community or "normal" internet site, it will also depend on the usage - eg, are people really going to be using this intensely, or is it something that some users will simply log into once, and then forget? This can be a tough question from your perspective, since you will want to assume the former, but if you spend a lot of money on network infrastructure and no one ends up using it, it can be a very bad thing.
What is the site doing? At the low end of the spectrum, there is a "typical" web application, where you have reasonable size (say, 1-2k) pages and a handful of images. A bit more intense is a site that has a lot of media - eg, flickr style image browsing. At the upper end is a site with a lot of downloads - streaming movies, or just large files or datasets being downloaded.
This is getting a bit outside the scope of your question, but another thing to look at is the future of the site: is the usage possibly going to double in the next year, or month? Be wary of locking into a long-term contract for something like a T1 or fiber connection without having some way to upgrade.
Another question is reliability - do you need redundancy in connections? It can cost a lot up front, but there are ways to do multi-homed connections where you can balance access across a couple of links, and then just use one (albeit with reduced capacity) in the event of failure.
Another option to consider, which effectively lets you completely avoid this entire question, is to just host the application in a datacenter. You pay a relatively low monthly fee (low compared to the cost of a dedicated high-quality connection), and you get as much bandwidth as you need (eg, most hosting plans will give you something like 500GB transfer a month, to start with - and some will just give you unlimited). The datacenter is also going to be more reliable than anything you can build (short of your own 6+ figure datacenter) because they have redundant internet, power backup, redundant cooling, fire protection, physical security.. and they have people that manage all of this for you, so you never have to deal with it.
I'm thinking about making a networked game. I'm a little new to this, and have already run into a lot of issues trying to put together a good plan for dead reckoning and network latency, so I'd love to see some good literature on the topic. I'll describe the methods I've considered.
Originally, I just sent the player's input to the server, simulated there, and broadcast changes in the game state to all players. This made cheating difficult, but under high latency things were a little difficult to control, since you don't see the results of your own actions immediately.
This Gamasutra article has a solution that saves bandwidth and makes local input appear smooth by simulating on the client as well, but it seems to throw cheat-proofing out the window. Also, I'm not sure what to do when players start manipulating the environment, pushing rocks and the like. These previously neutral objects would temporarily become objects the client needs to send PDUs about -- or perhaps objects that multiple players send PDUs about at once. Whose PDUs would win? When would the objects stop being doubly tracked by each player (to compare against the dead-reckoned version)? Heaven forbid two players engage in a sumo match (e.g. start pushing each other).
This gamedev.net post argues the Gamasutra solution is inadequate and describes a different method, but that doesn't really fix my collaborative boulder-pushing example either. Most other things I've found are specific to shooters. I'd love to see something more geared toward games that play like SNES Zelda, but with a little more physics/momentum involved.
Note: I'm not asking about physics simulation here -- other libraries have that covered. Just strategies for making games smooth and reactive despite network latency.
Check out how Valve does it in the Source Engine: http://developer.valvesoftware.com/wiki/Source_Multiplayer_Networking
If it's for a first person shooter you'll probably have to delve into some of the topics they mention such as: prediction, compensation, and interpolation.
I find this network physics blog post by Glenn Fiedler, and even more so the response/discussion below it, excellent. It is quite lengthy, but worthwhile.
In summary
In a modern game physics simulation (e.g. vehicles or rigid-body dynamics), the server cannot keep up with re-running the simulation whenever client input is received. Therefore the server orders all clients to run latency+jitter (time) ahead of the server, so that all incoming packets arrive just in time, before the server needs them.
He also gives an outline of how to handle the type of ownership you are asking about. The slides he showed at GDC are awesome!
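A sketch of how a client might maintain that lead time; the smoothing constants and the 2x jitter margin are my own choices, not Fiedler's exact scheme:

```python
class ClientClock:
    """Tracks how far ahead of server time this client should simulate."""

    def __init__(self):
        self.latency = 0.0   # smoothed one-way latency estimate (seconds)
        self.jitter = 0.0    # smoothed deviation of that estimate

    def on_ping_sample(self, rtt):
        one_way = rtt / 2
        self.jitter = 0.9 * self.jitter + 0.1 * abs(one_way - self.latency)
        self.latency = 0.9 * self.latency + 0.1 * one_way

    def lead_time(self):
        # Run this far ahead so inputs reach the server just in time.
        return self.latency + 2 * self.jitter
```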
On cheating
Mr Fiedler himself (and others) state that this algorithm suffers from not being very cheat-proof. This is not true: this algorithm is no easier or harder to exploit than traditional client/server prediction (see the article regarding traditional client/server prediction in CD Sanchez's answer).
To be absolutely clear: the server is not easier to cheat simply because it receives network physical positioning just in time (rather than x milliseconds late as in traditional prediction). The clients are not affected at all, since they all receive the positional information of their opponents with the exact same latency as in traditional prediction.
No matter which algorithm you pick, you may want to add cheat-protection if you're releasing a major title. If you are, I suggest adding encryption against stooge bots (for instance an XOR stream cipher where the "keystream is generated by a pseudo-random number generator") and simple sanity checks against cracks. Some developers also implement algorithms to check that the binaries are intact (to reduce risk of cracking) or to ensure that the user isn't running a debugger (to reduce risk of a crack being developed), but those are more debatable.
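For instance, here is a minimal XOR stream cipher along those lines, with the keystream derived from SHA-256 in counter mode; this is an illustration only, not a vetted protocol (a real title should use an established AEAD such as AES-GCM):

```python
import hashlib

def keystream(key: bytes, nonce: bytes, n: int) -> bytes:
    """Pseudo-random keystream: hash(key | nonce | counter), repeated."""
    out = bytearray()
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(out[:n])

def xor_cipher(key: bytes, nonce: bytes, data: bytes) -> bytes:
    """XOR the payload with the keystream; the same call decrypts."""
    ks = keystream(key, nonce, len(data))
    return bytes(a ^ b for a, b in zip(data, ks))

packet = xor_cipher(b"secret", b"nonce0", b"move north")
print(xor_cipher(b"secret", b"nonce0", packet))  # b'move north' (symmetric)
```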
If you're just making a smaller indie game that may only be played by a few thousand players, don't bother implementing any anti-cheat algorithms until 1) you need them, or 2) the user base grows.
We have implemented a multiplayer snake game based on an authoritative server and remote players that make predictions. Every 150ms (in most cases) the server sends back a message containing all the consolidated movements sent by each remote player. If a remote client's movements arrive late at the server, it discards them; the client will then replay its last movement.
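A rough sketch of such a consolidation tick; the inbox/broadcast interfaces and the tick numbering are my own assumptions, not the actual implementation:

```python
import time

TICK = 0.150   # consolidation interval from the description above

def server_loop(inbox, broadcast):
    """inbox.drain() -> [(player, move, client_tick)]; both names assumed."""
    tick = 0
    while True:
        time.sleep(TICK)
        tick += 1
        moves = {}
        for player, move, client_tick in inbox.drain():
            if client_tick >= tick - 1:   # discard moves for stale ticks; the
                moves[player] = move      # client simply replays its last move
        broadcast(tick, moves)            # one consolidated message per tick
```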
Check out Networking education topics at the XNA Creator's Club website. It delves into topics such as network architecture (peer to peer or client/server), Network Prediction, and a few other things (in the context of XNA of course). This may help you find the answers you're looking for.
http://creators.xna.com/education/catalog/?contenttype=0&devarea=19&sort=1
You could try imposing latency on all your clients, based on the average latency in the area. That way the clients can work around the latency issues together, and the game will feel similar for most players.
I'm of course not suggesting that you force a 500ms delay on everyone, but people with 50ms can be fine with 150ms (an extra 100ms added) in order for the gameplay to appear smoother.
In a nutshell; if you have 3 players:
John: 30ms
Paul: 150ms
Amy: 80ms
After calculations, instead of sending the data back to the clients all at the same time, you account for their latency and start sending to Paul and Amy before John, for example.
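A sketch of that scheduling with the numbers above (treating them as one-way latencies in ms):

```python
latencies_ms = {"John": 30, "Paul": 150, "Amy": 80}

# Hold each client's update so everyone receives it at roughly the same instant.
slowest = max(latencies_ms.values())
send_delay_ms = {name: slowest - lat for name, lat in latencies_ms.items()}
print(send_delay_ms)  # {'John': 120, 'Paul': 0, 'Amy': 70}
```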
This approach is not viable in extreme latency situations, though, where dialup connections or wireless users could really mess it up for everybody. But it's an idea.