Configuring Kamailio for RTP Proxying

Would I need to install RTPProxy and RTPEngine for multiple users to send audio to one another? I'm not sure if RTPProxy and RTPEngine are two different modules that do the same thing.

RTPProxy and RTPEngine serve the same purpose: proxying media, mainly for relaying. However, RTPEngine is essentially an updated RTPProxy, and it can do much more than just relay media streams, such as handling crypto (SRTP), capturing pcaps, and recording calls to MP3/WAV files.
RTPProxy: http://www.kamailio.org/docs/modules/5.0.x/modules/rtpproxy.html
RTPEngine: http://www.kamailio.org/docs/modules/5.0.x/modules/rtpengine.html
The better approach, as I suggest, is to use RTPEngine.

Should Kamailio and Asterisk be on different servers

Are there many disadvantages/advantages to having Kamailio and Asterisk on the same Server
Thank you,
For small deployments with a low number of subscribers and active calls, it makes sense to save resources and run both Kamailio and Asterisk on the same server.
Otherwise, running them on separate servers is the common practice, allowing you to dimension each server based on its processing needs. Usually Kamailio needs fewer resources because it handles only the signalling channel. Asterisk may have to do heavy audio processing such as transcoding, audio conferencing or complex IVR menus, so it needs more powerful systems, sometimes with dedicated hardware.

Other common protocols besides HTTP?

I usually pass data between my web servers (in different locations) using HTTP requests (sometimes using SSL if it's sensitive). I was wondering if there were any lighter protocols that I might be able to swap HTTP(S) for that would also support public/private keys like SSH or something.
I've built an SMTP client with PHP sockets before, so I wouldn't mind doing that if required.
There are lots and lots and lots of protocols. Lots. Start here for a list.
http://en.wikipedia.org/wiki/Internet_Protocol_Suite
SFTP is fun for passing data around. It works well. You'll find that it's not much better than HTTP, however, because HTTP is pretty simple.
http://en.wikipedia.org/wiki/SSH_file_transfer_protocol
SMTP would work. http://en.wikipedia.org/wiki/Simple_Mail_Transfer_Protocol
SNMP can be made to work. http://en.wikipedia.org/wiki/Simple_Network_Management_Protocol You have to really push the envelope.
Most of these, however, run over TCP connections, which carry a fair amount of overhead because of connection negotiation and packet acknowledgement (SNMP is the exception: it typically runs over UDP).
If you want real fun with very low overhead, use UDP.
http://en.wikipedia.org/wiki/User_Datagram_Protocol
You might want to use Reliable UDP if you're worried about messages getting dropped.
http://en.wikipedia.org/wiki/Reliable_User_Datagram_Protocol
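If you want a feel for how bare UDP behaves, here is a minimal Python sketch; the port number is arbitrary, and the hand-rolled acknowledgement stands in for the delivery guarantees Reliable UDP would give you:

```python
import socket

# Minimal UDP round trip on localhost: no handshake, no delivery
# guarantee; anything reliability-related is up to you.
server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
server.bind(("127.0.0.1", 9999))   # port chosen arbitrarily for the sketch

client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
client.sendto(b"payload", ("127.0.0.1", 9999))   # fire and forget

data, addr = server.recvfrom(1024)    # blocks forever if the datagram is lost
server.sendto(b"ack:" + data, addr)   # a hand-rolled acknowledgement
print(client.recvfrom(1024)[0])       # b'ack:payload'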
I'd like to mention XMPP in addition to protocols already listed in other answers.
It's lightweight, and it is used in some "realtime" communication systems (for example, in GTalk).
WebSocket is a good option if you are interested in keeping a connection open to pass multiple messages back and forth. It's useful for issuing updates from the server to clients in real time, for example.
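As a rough illustration, using the third-party Python websockets package (the URL and the "subscribe" message are made up for this sketch):

```python
import asyncio
import websockets  # third-party: pip install websockets

async def main():
    # Hold one connection open and let the server push updates over it.
    async with websockets.connect("ws://example.com/updates") as ws:
        await ws.send("subscribe")
        while True:
            msg = await ws.recv()  # each server push arrives as a message
            print("update:", msg)

asyncio.run(main())
```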
Why don't you simply use FTPS:
http://en.wikipedia.org/wiki/FTPS
or SFTP, which runs over SSH and therefore supports the public/private key authentication you asked about:
http://en.wikipedia.org/wiki/SSH_file_transfer_protocol
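For example, with the third-party Python paramiko library (host, paths and key file below are placeholders), an SFTP transfer authenticated by an SSH key pair looks roughly like this:

```python
import paramiko  # third-party: pip install paramiko

# All host names, usernames and paths are placeholders for the sketch.
client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())  # fine for a sketch; pin host keys in production
client.connect("files.example.com", username="deploy",
               key_filename="/home/me/.ssh/id_ed25519")  # private-key auth

sftp = client.open_sftp()
sftp.put("payload.json", "/incoming/payload.json")  # upload
sftp.get("/outgoing/result.json", "result.json")    # download
sftp.close()
client.close()
```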

Scanning LAN game servers using winsock

I'm trying to figure out how to use Winsock to turn my game into a LAN-playable game. I've read some of the Winsock documentation, but I can't figure out how a client can find all the games that were created on the LAN.
Does it have to try to 'connect' to each IP on LAN, like trying to connect to 192.168.0.1, then 192.168.0.2, etc? Is there a better way?
You would use broadcasting to advertise your servers on the LAN. Clients can then listen for these broadcasts to 'find' servers.
See here for more info:
http://tangentsoft.net/wskfaq/intermediate.html#broadcast
Typically these game servers use the local UDP broadcast, which is something that all clients receive and can process so long as they are listening to it.
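The pattern is small enough to sketch in Python; the discovery port and message format below are arbitrary choices for this sketch, not part of any standard:

```python
import socket
import time

PORT = 54545  # arbitrary discovery port, an assumption for this sketch

def advertise(name: str):
    """Server side: announce our presence to the whole LAN once a second."""
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
    while True:
        s.sendto(name.encode(), ("255.255.255.255", PORT))
        time.sleep(1.0)

def discover(timeout=3.0):
    """Client side: collect every server heard broadcasting within `timeout`."""
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    s.bind(("", PORT))        # listen on the discovery port
    s.settimeout(timeout)
    found = {}
    try:
        while True:
            data, (ip, _) = s.recvfrom(1024)
            found[ip] = data.decode(errors="replace")
    except socket.timeout:
        return found          # e.g. {'192.168.0.42': 'My Game Server'}
```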
Here is some sample client and server code I found that may be of interest to you: http://visual-c.itags.org/visual-c-c++/29424/
I think there are two possible ways to do this.
Make a "lobby" that clients and servers connect to so they can find each other through it.
Servers broadcast UDP packets. Clients listen and update a list of servers.
If you need a quick and easy way, the 2nd option would be great, but remember that most of the UDP packets will be wasted, as each client only uses them once.
The 1st option is a more general and extensible solution to this problem. However, it might need more time to design and implement.
First off, I suggest that you get Wireshark for any networking development. It will show you what packets go over the wire, and it will let you see how other games do it, since there are many ways of doing this.
Using a UDP broadcast is one way. Simply change the last byte of the target IP to 255 (e.g. 192.168.0.255 on a typical /24 LAN) and you should be OK.

Can two or more SNMP agents be run on the same port (on the same machine)?

Just a technical question -
Can two or more SNMP agents be run on the same port (on the same machine)?
My first instinct would be no since host:port identifies an instance of an application but I'm not sure.
Thank you!
Technically, if the OS supports it, the SO_REUSEADDR/SO_REUSEPORT options may be set on a socket to allow other processes to bind to the same address/port, and thus allow multiple processes to receive messages on that address/port. But both processes would have to set the option, and I doubt any agent implementations do that, because it would not make sense: it would just cause headaches to have both agents potentially responding to a single request. Managers won't be equipped to handle it.
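To make the mechanics concrete, here is a minimal Python sketch of the REUSE options; the port is an unprivileged stand-in for 161:

```python
import socket

def make_shared_socket(port=10161):
    """UDP socket that shares its port with other sockets. Every
    participant must set the REUSE options *before* bind()."""
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    # SO_REUSEPORT is what actually permits two identical UDP binds
    # on Linux/BSD; the constant does not exist on Windows.
    if hasattr(socket, "SO_REUSEPORT"):
        s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEPORT, 1)
    s.bind(("0.0.0.0", port))
    return s

# Two sockets bound to the same address/port in one process. The kernel
# delivers each incoming datagram to only *one* of them, which is exactly
# why two full SNMP agents sharing a port would confuse managers.
a = make_shared_socket()
b = make_shared_socket()
```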
However, you can instead run an SNMP proxy in the primary address/port, configured to forward requests to one of multiple agents based on query, security, or (with SNMPv3) context/engine ID parameters, and forward responses back.
Also, using AgentX, you have an SNMP master agent running on the primary address/port, and one or more SNMP sub-agents connected to the master agent. The master agent dispatches requests to the sub-agents as appropriate, merging the results into a single response, so that to the outside world it appears as a single agent. Each sub-agent typically handles a different branch of OID space (one sub-agent implementing certain module(s), another sub-agent implementing other module(s)).
But taking two agents intended to own the address/port exclusively, and forcing them to share through the REUSE options, while it may be possible, would not be wise.
You can run multiple agents on the same host and with the same port if they have different IP addresses (you can use a netsh script to add the addresses).
Personally, I use the nsoftware DLL (SecureSNMP V8 .NET edition) to do this.
You can look at this post : Multiple SNMP Agents with nsoftware dll
No, two agents cannot both run on the same port as separate applications, for the reasons you assumed (except with a brittle packet-sniffing hack, which we'll not go into).
However, 2 agents can be accessed through the same port if there is some mechanism that handles the actual port and distributes requests based on MIB. For example the Windows SNMP service does this, allowing any number of SNMP agents to be added as "extensions" through the registry (HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\SNMP\Parameters\ExtensionAgents) by writing them as DLLs and using the snmp.h headers in the platform SDK.
You are correct: ports can't be shared.
If both the agents were designed by you, then the answer can be different.
Consider the HTTP and FTP cases: we can use host names to distinguish multiple sites on the same port, so why can't we do it for SNMP?
We can create a dispatcher that monitors port 161 for incoming traffic, then use multiple real agents behind it to handle that traffic. We are free to design how to distinguish them; personally, I prefer the FTP virtual host name manner and use | to distinguish agents.
Maybe I can create a demo for #SNMP Suite in the future.
But if you need to work with existing agents on the same server, then such flexibility is lost.
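As a rough illustration of the dispatcher idea above, here is a naive Python sketch. It exploits the fact that in SNMP v1/v2c the community string appears verbatim in the datagram, so it can route on a | suffix without a full BER decode; the port numbers and suffixes are made up, and a real dispatcher would properly decode and rewrite the message instead:

```python
import socket

# Hypothetical routing table: community suffix -> backend agent address.
AGENTS = {b"sys": ("127.0.0.1", 16101), b"net": ("127.0.0.1", 16102)}
DEFAULT = AGENTS[b"sys"]

def pick_agent(request: bytes):
    """Naive routing: in SNMP v1/v2c the community string is embedded
    verbatim in the datagram, so just look for a '|suffix' marker.
    A real dispatcher would BER-decode the message instead."""
    for suffix, addr in AGENTS.items():
        if b"|" + suffix in request:
            return addr
    return DEFAULT

listener = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
listener.bind(("0.0.0.0", 10161))  # unprivileged stand-in for port 161

while True:
    request, manager = listener.recvfrom(65535)
    # NB: a real dispatcher would strip the '|suffix' before forwarding,
    # which requires rewriting the BER-encoded community string.
    backend = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    backend.settimeout(5.0)
    backend.sendto(request, pick_agent(request))
    try:
        reply, _ = backend.recvfrom(65535)
        listener.sendto(reply, manager)  # relay the response back
    except socket.timeout:
        pass  # agent did not answer; the manager will time out and retry
    finally:
        backend.close()
```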

Comparing HTTP and FTP for transferring files

What are the advantages (or limitations) of one over the other for transferring files over the Internet?
(I am aware of secure forms of both protocols. I'd like to hear comparisons through personal experiences in terms of performance, reliability, file size limitations etc.)
Here's a performance comparison of the two. HTTP is more responsive for request-response transfers of small files, but FTP may be better for large files if tuned properly. FTP used to be generally considered faster. FTP requires a control channel, and state must be maintained besides the TCP state, whereas HTTP does not. There are six packet transfers before data starts transferring in FTP, but only four in HTTP.
I think a properly tuned TCP layer would have more effect on speed than the difference between application layer protocols. The Sun Blueprint Understanding Tuning TCP has details.
Here's another good comparison of the individual characteristics of each protocol.
I just benchmarked a file transfer over both FTP and HTTP:
over two very good server connections
using the same 1 GB .zip file
under the same network conditions (tested one after the other)
The result:
using FTP: 6 minutes
using HTTP: 4 minutes
using concurrent HTTP downloader software (FDM): 1 minute
So, basically, under a "real life" situation:
1) HTTP is faster than FTP when downloading one big file.
2) HTTP can use parallel chunk downloads, which makes it up to 6x faster than FTP, depending on the network conditions.
Many firewalls drop outbound connections which are not to ports 80 or 443 (http & https); some even drop connections to those ports that are not HTTP(S). FTP may or may not be allowed, not to speak of the active/PASV modes.
Also, HTTP/1.1 allows for much better partial requests ("only send from byte 123456 to the end of file"), conditional requests and caching ("only send if content changed/if last-modified-date changed") and content compression (gzip).
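For instance, a ranged request with Python's standard library looks like this (the URL is a placeholder; a server that supports ranges answers 206 Partial Content):

```python
import urllib.request

# Resume/partial download via an HTTP/1.1 Range request.
req = urllib.request.Request(
    "http://example.com/big.zip",
    headers={"Range": "bytes=123456-"},  # "from byte 123456 to end of file"
)
with urllib.request.urlopen(req) as resp:
    print(resp.status)                    # 206 if ranges are supported
    print(resp.headers["Content-Range"])  # e.g. bytes 123456-999999/1000000
    data = resp.read()
```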
HTTP is much easier to use through a proxy.
From my anecdotal evidence, HTTP is easier to make work with dropped/slow/flaky connections; e.g. it is not needed to (re)establish a login session before (re)initiating transfer.
OTOH, HTTP is stateless, so you'd have to do authentication and building a trail of "who did what when" yourself.
The only difference in speed I've noticed is transferring lots of small files: HTTP with pipelining is faster (reduces round-trips, esp. noticeable on high-latency networks).
Note that HTTP/2 offers even more optimizations, whereas the FTP protocol has not seen any updates for decades (and even extensions to FTP have insignificant uptake by users). So, unless you are transferring files through a time machine, HTTP seems to have won.
(Tangentially: there are protocols that are better suited for file transfer, such as rsync or BitTorrent, but those don't have as much mindshare, whereas HTTP is Everywhere™.)
One consideration is that FTP can use non-standard ports, which can make getting through firewalls difficult (especially if you're using SSL). HTTP is typically on a well-known port, so this is rarely a problem.
If you do decide to use FTP, make sure you read about Active and Passive FTP.
In terms of performance, at the end of the day they're both spewing files directly down TCP connections so should be about the same.
One advantage of FTP is that there is a standard way to list files, using dir or ls. Because of this, FTP plays nicely with tools such as rsync. Granted, rsync is usually done over SSH, but the option is there.
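In Python, the standard library exposes exactly those listing commands (host and credentials below are placeholders):

```python
from ftplib import FTP

# Standard directory listings over FTP.
with FTP("ftp.example.com") as ftp:
    ftp.login("anonymous", "guest@example.com")
    names = ftp.nlst()   # bare name list (NLST)
    ftp.dir()            # long listing (LIST), printed to stdout
    print(names)
```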
Both of them use TCP as the transport protocol, but HTTP can use a persistent connection, which makes better use of TCP: the handshake and slow-start costs are paid once rather than per transfer.
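A quick sketch of what that buys you, using Python's standard library (host and paths are placeholders); one TCP connection serves several requests:

```python
import http.client

# One TCP connection, several requests: HTTP keep-alive from the stdlib.
conn = http.client.HTTPConnection("example.com")
for path in ("/a.txt", "/b.txt"):
    conn.request("GET", path)
    resp = conn.getresponse()
    body = resp.read()           # drain the body before reusing the connection
    print(path, resp.status, len(body))
conn.close()
```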
