What's the meaning of AS in SDP?

I cannot understand the meaning of AS in SDP. I checked RFC 2327, which describes it as below:
AS Application-Specific Maximum: The bandwidth is interpreted to be application-specific, i.e., will be the application’s concept of maximum bandwidth. Normally this will coincide with what is set on the application’s "maximum bandwidth" control if applicable.
My question is: how do I calculate AS in my program?
I have looked up a lot of information, but in vain. Is the value of AS simply the codec's bit rate?

The SDP protocol allows an entity taking part in a communications session to provide some information about the session and/or its capabilities for a session it wants to set up or join.
This can include information like the connection type, contact details for the organiser of a conference, timing for a conference or session and media information.
Bandwidth is one of these pieces of information - it allows the organiser or client joining or setting up a session to indicate how much bandwidth they expect to use.
The bandwidth line takes a modifier, and the one you have flagged, AS, simply says that this is the maximum bandwidth the application expects to use for the session.
It's an optional piece of information, so you may not need it depending on your application.
Note that codec information is usually included in the media and rtpmap attributes - if you look at some SIP (Session Initiation Protocol, used for many VoIP services) traces you can see examples of codec information being shared - for example for an audio call (between Bob and Alice, as is usual in these specs):
v=0
o=bob 2890844527 2890844527 IN IP4 client.biloxi.example.com
s=-
c=IN IP4 192.0.2.201
t=0 0
m=audio 3456 RTP/AVP 0
a=rtpmap:0 PCMU/8000
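To answer the calculation part of the question: the AS value is carried on a b= line and is expressed in kilobits per second. As a hedged illustration for the PCMU stream above, the media description could advertise something like:
m=audio 3456 RTP/AVP 0
b=AS:64
a=rtpmap:0 PCMU/8000
In practice implementations differ in what they count: many simply advertise the codec's nominal bit rate (64 kbit/s for PCMU), while others add RTP/UDP/IP header overhead and send a larger figure (RFC 3556 relates AS for RTP media to the RTP "session bandwidth"), so treat it as a declared expectation rather than something you can calculate exactly.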
Lots of other examples are also in the SIP examples RFC: https://datatracker.ietf.org/doc/html/rfc3665

Deciding whether to use ws or wss?

I have an account with a trading exchange and they have a websockets API which supports ws://... and wss://...
For the non-authenticated channels such as the current state of the orderbook, is it an easy decision to just use ws, mostly for the (however minimal) time savings? Obviously I would like to have my data as recent as possible.
I just want to check there's not some other factor which is more important than a few TLS encryption CPU cycles and ms in latency saved.
I see absolutely no reason not to use wss for all your connections, particularly with a WebSocket. The normal WebSocket use is to make a connection, then keep that connection for a long time and use it. While there is a bit of overhead on every transmission because of the encryption, the main wss overhead is when you first make the connection, and that only happens once per connection.
I just want to check there's not some other factor which is more important than a few TLS encryption CPU cycles and ms in latency saved.
No, there is not some other factor. In fact, the opposite: there are more and more reasons these days to use TLS whenever possible to protect your privacy.
For the non-authenticated channels such as the current state of the orderbook, is it an easy decision to just use ws, mostly for the (however minimal) time savings?
Why? If wss is available, I'd be using it for everything. If you actually run into a CPU problem down the road, you could revisit whether using wss has anything to do with it, but that is unlikely to happen and, in my opinion, you have nothing to lose by starting with wss. While one wants to design code intelligently, you don't want to try to micro-optimize performance-related things before you even have a documented, measured performance issue to worry about.
General reasons to use TLS:
Privacy (nobody in the middle can snoop on what you're doing)
Security of data (nobody can read your data, not even proxies)
Security of endpoint (endpoint you're connecting to can't be hijacked without you knowing about it)
I just want to check there's not some other factor which is more important than a few TLS encryption CPU cycles and ms in latency saved.
Actually, there is.
This is also stated in the RFC:
At the time of writing of this specification, it should be noted that connections on ports 80 and 443 have significantly different success rates, with connections on port 443 being significantly more likely to succeed, though this may change with time.
Some network intermediaries (especially with some mobile providers) will fail on ws connections but work properly when using wss.
The reason seems to be that these intermediaries (proxies / routers) will attempt to read the WebSocket message as if it were HTTP and "fix" HTTP errors or resolve caching (which actually corrupts WebSocket data).
The encrypted wss protocol will trigger a pass-through mode, since these intermediaries won't be able to read the data or "fix" any HTTP errors.
The WebSocket protocol uses client-side frame masking for the same purpose, but sometimes with limited results. Using wss increases connectivity on some networks.

Should I use UDP or TCP for my Minecraft-style game?

I'm creating a 2D Minecraft style game, where the map is stored in a 2-dimensional int array.
You can place and destroy blocks and ai-controlled characters will walk around the map.
The game is made using XNA/C#.
The problem is that I don't have much experience coding networked games.
Which protocol should I use? UDP, TCP? Or perhaps the Lidgren library (which uses UDP plus reliability)?
Should I let the following things be done on the client, server or on both?
ai/pathfinding
collision detection
Also, is it good practice to send destroy and place-block messages to the server?
I guess that when the client starts, it first needs to download the map. And then changes to the map will be made in parallel to the maps on the clients and the one on the server...
Finally, should I broadcast the positions of the characters only when they change (tends towards TCP) or should I continually send them (tends towards UDP)?
I feel as if the thread below has some nice information that will be useful to you. Like AmitApollo said, in a nutshell UDP is faster, but less reliable. If all your information you're sending across this network is absolutely vital then TCP might be the best implementation. You could always try both and see what kinds of performance/latency hits you have. In general, most fast paced/realtime games I've read up on have used UDP.
Android game UDP / TCP?
Everything should be verified by the server so that the game cannot be hacked or modified, whether by editing memory addresses or through some other vulnerability.
Both AI/pathfinding and collision should be validated by the server; however, using TCP for both would cause congestion because of the handshake and window overhead. MMOs today use UDP packets with custom congestion control and handshakes. As a first version you should simply use plain UDP packets - when packets are lost or dropped in transmission the game will simply lag and freeze until a UDP packet gets through. Subsequent versions of your game could implement a custom acknowledgment setup over UDP such that the character pauses until the move is validated, as in the flow below:
Client -- movement request (UDP) --> Server
Server: validate the requested coordinates on the map (the client keeps the character frozen while it waits)
Client <-- YES or NO to the move request -- Server
Client: move the character based on the response
This makes sure that every movement by the characters is valid. You will also need security keys or other protocol security so that not just anyone can send coordinates to be validated.
You may think this design will lag your game, but if properly designed it will be secure and free from client-side hacking. Remember to keep your UDP packets as small as possible.
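To make that flow concrete, here is a minimal client-side sketch in C. It assumes a UDP socket already connect()ed to the server and invents a tiny message layout (a MSG_MOVE type byte plus two coordinates) and a one-byte YES/NO reply - adapt it to whatever protocol you actually design:
#include <sys/select.h>
#include <sys/socket.h>

/* Hypothetical wire format: 1-byte message type + two 32-bit coordinates.
 * A real game would serialize these fields explicitly instead of sending
 * the raw struct (padding and byte order differ between machines). */
struct move_request { unsigned char type; int x, y; };

/* Send a movement request and wait briefly for the server's verdict.
 * Returns 1 for YES, 0 for NO, -1 if the reply was lost or late
 * (in which case the character simply stays frozen, i.e. the game lags). */
int request_move(int sock, int x, int y)
{
    struct move_request req = { 0x01 /* MSG_MOVE, assumed */, x, y };
    unsigned char verdict = 0;
    fd_set rfds;
    struct timeval tv = { 0, 200000 };   /* wait at most 200 ms for an answer */

    if (send(sock, &req, sizeof(req), 0) < 0)
        return -1;

    FD_ZERO(&rfds);
    FD_SET(sock, &rfds);
    if (select(sock + 1, &rfds, NULL, NULL, &tv) <= 0)
        return -1;                       /* request or reply lost: no move */

    if (recv(sock, &verdict, sizeof(verdict), 0) < 0)
        return -1;

    return verdict == 1;                 /* 1 = YES, 0 = NO (assumed encoding) */
}
Later versions could layer sequence numbers and retries on top of this so a single lost datagram does not freeze the character for long.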

IPv6 Header Priority

From this site (http://www.ipv6.com/articles/general/IPv6-Header.htm), it says:
Packet priority/Traffic class (8 bits) The 8-bit Priority field in the IPv6 header can assume different values to enable the source node to differentiate between the packets generated by it by associating different delivery priorities to them. This field is subsequently used by the originating node and the routers to identify the data packets that belong to the same traffic class and distinguish between packets with different priorities.
I was wondering if it is possible to actually "hack" the TCP/IP stack in order to give your packets higher priority. Would you get any substantial gain in network performance? Also, if it is possible, how is it prevented?
Yes, it's possible, but it's not really hacking. There is a standard programming interface that will allow your program to indicate to the stack how it would like the Traffic Class header field to be populated.
Whether or not you will measure any performance difference depends on the network that handles your packets. Think of the Traffic Class field as a hint for the network; a suggestion for how you would like your packet to be handled. The network might ignore it, or even change it to a different code point. Furthermore, the notion of "priority" (also known as "precedence") as an interpretation of the Traffic Class field has given way to a much richer collection of Per-Hop Behaviors (PHBs).
See IETF RFC 3542 Advanced Sockets Application Program Interface (API) for IPv6. In particular, read the first part of Section 4, Access to IPv6 and Extension Headers, and Section 6.5, Specifying/Receiving the Traffic Class value.
Here is a code snippet that sets the Traffic Class field to the integer MY_TCLASS for all packets sent on the socket sk.
int tclass = MY_TCLASS;   /* the Traffic Class value you want on outgoing packets */
if (setsockopt(sk, IPPROTO_IPV6, IPV6_TCLASS, &tclass, sizeof(tclass)) < 0)
    perror("setsockopt(IPV6_TCLASS)");
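If you need a different Traffic Class per packet rather than per socket, RFC 3542 (Section 6.5) also lets you pass it as ancillary data on sendmsg(). A rough sketch under those assumptions - the helper name and the already-created IPv6 UDP socket and destination address are illustrative, not from the RFC:
#include <string.h>
#include <sys/socket.h>
#include <netinet/in.h>

/* Send one datagram with a per-packet Traffic Class carried as ancillary data. */
ssize_t send_with_tclass(int sk, const struct sockaddr_in6 *dst,
                         const void *buf, size_t len, int tclass)
{
    struct iovec iov = { (void *)buf, len };
    char cbuf[CMSG_SPACE(sizeof(int))];
    struct msghdr msg;
    struct cmsghdr *cmsg;

    memset(&msg, 0, sizeof(msg));
    msg.msg_name = (void *)dst;
    msg.msg_namelen = sizeof(*dst);
    msg.msg_iov = &iov;
    msg.msg_iovlen = 1;
    msg.msg_control = cbuf;
    msg.msg_controllen = sizeof(cbuf);

    cmsg = CMSG_FIRSTHDR(&msg);
    cmsg->cmsg_level = IPPROTO_IPV6;
    cmsg->cmsg_type = IPV6_TCLASS;
    cmsg->cmsg_len = CMSG_LEN(sizeof(int));
    memcpy(CMSG_DATA(cmsg), &tclass, sizeof(int));

    return sendmsg(sk, &msg, 0);
}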
Related reading:
IETF RFC 3493 Basic Socket Interface Extensions for IPv6
Section 5 talks about basic socket options
IETF RFC 2474 Definition of the Differentiated Services Field (DS Field) in the IPv4 and IPv6 Headers
Section 7.1 discusses Theft and Denial of Service, which, from the point of view of a network operator, is what you're asking about.
IETF RFC 2475 An Architecture for Differentiated Services
Section 2.1 covers a whole bunch of terminology.
I don't understand the question. You don't need to hack anything. There's an API provided for setting the TC on a socket. What effect it has depends on the cooperation of the intervening routers.
The source can set the priority, but the routers and gateways can also change it depending upon the type of packet that is being sent.

How can I force TCP to discard the oldest data segment in the buffer and accept the new data written by the application in NS2?

I'm trying to adjust TCP to work well for real-time communication. To do this, one of the requirements is to force TCP to accept new data written by the application even when the buffer is full, which makes TCP sometimes 'unreliable'. This way the application's write calls are never blocked and the timing of the sender application is not broken.
I think there must be an option in NS2 to make it possible.
So, How can I force TCP to discard the oldest data segment in the buffer and accept the new data written by the application in NS2?
You cannot. TCP is a "reliable stream". Any functionality that allowed data to be dropped would be counter to that goal and so there is no such support.
If you want to be able to drop data, you're going to have to switch to something like UDP and implement your own windowing/retry if you want "mostly reliable delivery" instead of "best effort".
If you are going to be dropping data anyway, just drop it before you send it to the socket. You can use select to see if the socket is available for writing, and if not drop the data at the application layer. If it is of the utmost importance that you have the latest freshest data, then see Brian's answer.
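As a concrete sketch of that select-and-drop idea, assuming a connected socket and treating each write as one disposable sample (a real sender would also make the socket non-blocking so a large write can never stall):
#include <sys/select.h>
#include <sys/socket.h>

/* Write the newest sample only if the send buffer can take it right now;
 * otherwise drop it at the application layer so the sender never blocks. */
ssize_t send_or_drop(int sock, const void *sample, size_t len)
{
    fd_set wfds;
    struct timeval tv = { 0, 0 };            /* poll: do not wait at all */

    FD_ZERO(&wfds);
    FD_SET(sock, &wfds);

    if (select(sock + 1, NULL, &wfds, NULL, &tv) > 0 && FD_ISSET(sock, &wfds))
        return send(sock, sample, len, 0);   /* room available: send it */

    return 0;                                /* buffer full: drop this sample */
}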
Edit
On a side note you may want to google for real time network protocols, and see what already exists.

Why Does RTP use UDP instead of TCP?

I wanted to know why UDP is used in RTP rather than TCP. The major VoIP tools use only UDP, as I found when I dug into some of the open-source VoIP software.
As DJ pointed out, TCP is about getting a reliable data stream, and will slow down transmission, and re-transmit corrupted packets, in order to achieve that.
UDP does not care about reliability of the communication, and will not slow down or re-transmit data.
If your application needs a reliable data stream, for example, to retrieve a file from a webserver, you choose TCP.
If your application doesn't care about corrupted or lost packets, and you don't need to incur the additional overhead to provide the additional reliability, you can choose UDP instead.
VOIP is not significantly improved by reliable packet transmission, and in fact, in some cases things in TCP like retransmission and exponential backoff can actually hurt VOIP quality. Therefore, UDP was a better choice.
A lot of good answers have been given, but I'd like to point one thing out explicitly:
Basically, a complete data stream is a nice thing to have for real-time audio/video, but it's not strictly necessary (as others have pointed out):
The important fact is that some data that arrives too late is worthless. What good is the missing data for a frame that should have been displayed a second ago?
If you were to use TCP (which also guarantees the correct order of all data), then you wouldn't be able to get to the more up-to-date data until the old one is transmitted correctly. This is doubly bad: you have to wait for the re-transmission of the old data and the new data (which is now delayed) will probably be just as worthless.
So RTP does some kind of best-effort transmission in that it tries to transfer all available data in time, but doesn't attempt to re-transmit data that was lost/corrupted during the transfer (*). It just goes on with life and hopes that the more important current data gets there correctly.
(*) actually I don't know the specifics of RTP. Maybe it does try to re-transmit, but if it does then it won't be as aggressive as TCP is (which won't ever accept any lost data).
The others are correct, however they don't really tell you the REAL reason why. Saua kind of hints at it, but here's a more complete answer.
Audio and Video is real-time. If you are listening to a radio, or watching TV, and the signal is interrupted, it doesn't pick up where you left off.. you're just "observing" the signal as it streams, and if you can't observe it at any given time, you lose it.
The reason is simple: delay. VoIP tries very hard to minimize the amount of delay from the time someone speaks into one end until you get it on your end, and your response back. Otherwise, as errors occurred, the amount of delay between when the person spoke and when the signal was received would continuously grow until it became useless.
Remember, each delay from a retransmission has to be replayed, and that causes further data to be delayed, then another error causes an even greater delay. The only workable solution is to simply drop any data that can't be displayed in real-time.
A 1 second delay from retransmission would mean it would now be 1 second from the time I said something until you heard it. A second 1 second delay now means it's 2 seconds from the time I say something until you hear it. This is cumulative because data is played back at the same rate at which it is spoken, and so on...
RTP could be connection oriented, but then it would have to drop (or skip) data to keep up with retransmission errors anyways, so why bother with the extra overhead?
Technically RTP packets can be interleaved over a TCP connection. There are lots of great answers given here. Two additional minor points:
RFC 4588 describes how one could use retransmission with RTP data. Most clients that receive RTP streams employ a buffer to account for jitter in the network that is typically 1-5 seconds long and which means there is time available for a retransmit to receive the desired data.
RTP traffic can be interleaved over a TCP connection. In practice when this is done, the difference between Interleaved RTP (i.e. over TCP) and RTP sent over UDP is how these two perform over a lossy network with insufficient bandwidth available for the user. The Interleaved TCP stream will end up being jerky as the player continually waits in a buffering state for packets to arrive. Depending on the player it may jump ahead to catch up. With an RTP connection you will get artifacts (smearing/tearing) in the video.
UDP is often used for various types of realtime traffic that doesn't need strict ordering to be useful. This is because TCP enforces an ordering before passing data to an application (by default, you can get around this by setting the URG pointer, but no one seems to ever do this) and that can be highly undesirable in an environment where you'd rather get current realtime data than get old data reliably.
RTP is fairly insensitive to packet loss, so it doesn't require the reliability of TCP.
UDP has less overhead for headers so that one packet can carry more data, so the network bandwidth is utilized more efficiently.
UDP provides fast data transmission also.
So UDP is the obvious choice in cases such as this.
Besides all the other nice and correct answers, this article gives a good understanding of the differences between TCP and UDP.
The Real-time Transport Protocol (RTP) is a network protocol used to deliver streaming audio and video over the internet, and it underpins Voice over IP (VoIP).
RTP is generally used with a signaling protocol, such as SIP, which sets up connections across the network. RTP applications can use the Transmission Control Protocol (TCP), but most use the User Datagram Protocol (UDP) instead because UDP allows for faster delivery of data.
UDP is used wherever data is sent that does not need to be received exactly on the target, or where no stable connection is needed.
TCP is used if data needs to be received exactly, bit for bit, with no loss.
For video and sound streaming, a few bits lost on the way do not affect the result in any way worth mentioning - a few failed pixels in one picture of the stream are nothing a user will notice. (On DVDs, the tolerated lost bit rate is even higher.)
just a remark:
Each packet sent in an RTP stream is given a number one higher than its predecessor. This allows the destination to determine if any packets are missing.
If a packet is missing, the best action for the destination to take is to approximate the missing value by interpolation.
Retransmission is not a practical option, since the retransmitted packet would be too late to be useful.
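To make that concrete, here is a small hedged sketch in C of the receiver-side bookkeeping, assuming the 16-bit RTP sequence number has already been parsed out of each arriving packet; a real receiver would also recognise reordered (late) packets, along the lines of the sequence-number handling in RFC 3550's example code:
#include <stdint.h>

/* Report how many packets were skipped before the one that just arrived,
 * using modulo-65536 arithmetic so the 16-bit counter can wrap around.
 * Simplification: a packet that arrives out of order (late) will show up
 * here as a huge gap rather than being recognised as a reordered packet. */
static uint16_t expected_seq;
static int have_first;

int packets_missing_before(uint16_t seq)
{
    int gap = 0;

    if (have_first)
        gap = (uint16_t)(seq - expected_seq);   /* 0 means no loss detected */
    have_first = 1;
    expected_seq = (uint16_t)(seq + 1);
    return gap;
}
The destination can then interpolate across the reported gap instead of asking for a retransmission.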
I'd like to add quickly to what Matt H said in response to Stobor's answer. Matt H mentioned that RTP over UDP packets can be checksummed so that if they are corrupted, they will get resent. This is actually an optional feature on most PBXs. In Asterisk, for example, you can enable/disable checksums on your RTP over UDP traffic in the rtp.conf configuration file with the following line:
rtpchecksums=yes ; or no if you prefer
Cheers!
