SIP test platform - simulator [closed]

I am searching for a tool that tests SIP calls: a platform that makes a call from SIP device A to SIP device B and reports the results...
Any ideas? A simulation platform would be ideal.

There are many solutions. Some more broken than others. Here's a quick summary of what I've found while looking for a base for a proper automated testing solution.
SIPp
It's OK if you only want a single dialog at a time. What doesn't work are complex scenarios where you need to synchronise two call legs, or do registration, call and presence in the same scenario. If you go this way, you'll end up running multiple SIPp scenarios for each conversation element separately. SIPp also doesn't scale at all for media transfers. Even though it's multithreaded, something stops it from running concurrently: if you look at htop, for example, you'll see that SIPp never crosses the 100% line. At around 50 media calls it starts to cut audio and take all the CPU of the machine.
It can sometimes lose track of what's happening, and packets that don't really belong to the call can fail the test. It also has some silly bugs, like case-sensitive comparison of headers.
SIPr/sipper
Ruby-based solution where you have to write your own scenarios in Ruby. It has its own SIP stack and lots of tests. While it's generally good and handles a lot of complex scenarios nicely, its design is terrible. Bugs are hard to track down, and after a week I had more than 10 patches that I needed just to make it do basic stuff. Later I learned that some of the scenarios just have to be written in a different way, but the SIPr developers were not really responsive and it took a lot of time to find that out. Synchronising the actions of many agents is a hard problem, and since they chose an event-based but still single-threaded model, it makes you concentrate too much on "what order can this happen in, and do I handle it correctly?" rather than on writing the actual test.
WinSIP
Commercial solution. I never tested it properly, since the basic functionality is missing from the evaluation version and it's hard to spend that much money on something you're not sure works...
SipUnit
Java-based solution reusing the JAIN-SIP stack. It can do almost any scenario and is fairly good. It tries to make everything non-blocking/action-based, leading to the same problems SIPr has, but in this case it's trivial to make it parallel/threaded. It has its own share of bugs, so not everything works well in the vanilla package, but most of the stuff is patchable. The developers seem to be busy with other projects, so it hasn't been updated in a long time. If you need transfers, presence, dialog-info, custom messages, RTP handling, etc., you'll have to write your own modifications to support them. It is not good for performance testing.
If you're a Java-hater like me, it can be used in a simple way from Jython, JRuby or any other JVM language.
In the end, I chose SipUnit as the least broken/evil/unusable solution. It is by no means perfect, but... it works in most cases. If I were doing the project again with all this knowledge, I'd probably reuse the SIPp configurations and try to write my own, sane solution that uses proper threading - but that's at least a half-year project for one person to make it good enough for production.

Check out SIPp at SourceForge. It has many different scenarios for testing; the UAS mode (server) would probably be the interesting one for you, and it seems to allow INVITE, BYE, etc.

Try SIPInspector. It is a Java-based utility to re-create different SIP signaling scenarios. It can play RTP and stress-test your system too. Since it is written in Java it is highly portable and works on different operating systems. It is way easier to use than SIPp.

What do you want to test, apart from whether the call gets through? Can't you simply call device B from device A and see if you can talk over the connection? If you want to look at the packets being sent, you should look into Wireshark.


Deciding between TCP connection vs. WebSocket [closed]

We are developing a browser extension which would send all the URLs visited by a logged-in user to backend APIs to be persisted.
Now, as the number of requests sent to the backend API would be huge, we are torn between creating a persistent connection via WebSocket, or doing it via TCP connection, i.e. using HTTP REST API calls.
The data posted to the backend API doesn't need to be real-time, as we would be using that data in our models, which don't demand it to be real-time.
We are inclined towards HTTP REST API calls due to the reasons below:
Easy to implement
Easy to scale (using auto-scaling techniques)
Everyone in the team is already comfortable with REST APIs
But at the same time, the cons would be:
At the scale where a lot of POST requests would be going to the server, we are not sure it would be optimised
It feels like WebSockets could give us a more optimised infrastructure :(
I would love to hear from the community about any pitfalls of going with the REST API option.
So first of all, TCP is the transport layer. It is not possible to use raw TCP; you have to create some protocol on top of it. You have to give meaning to the stream of data.
REST or HTTP or even WebSockets will never be as efficient as a custom-designed protocol on top of raw TCP (or even UDP). However, the gain may not be as spectacular as one may think. I've actually done such a transition once, and we experienced only a few percent of performance gain. And it was neither easy to do correctly nor easy to maintain. Of course, YMMV.
Why is that? Well, the reason is that HTTP is already quite highly optimized. First of all, you have the "keep-alive" header that keeps the connection open if it is used, so the default HTTP mechanisms already persist the connection. Secondly, HTTP handles body compression by default, and with HTTP/2 it also handles header compression. With HTTP/3 you get even more efficient TLS usage and better support in case of unstable networks (e.g. mobile).
Another thing is that since you do not require real-time data, you can do buffering. So you don't send data each time it is available; you gather it for, say, a few seconds, or minutes, or maybe even hours, and send it all in one go. With such an approach, the difference between HTTP and a custom protocol will be even less noticeable.
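A minimal sketch of that buffering approach in Python, assuming a hypothetical /events endpoint and using the requests library (all names here are illustrative; reusing a Session gives you HTTP keep-alive for free):

    import time
    import requests

    API_URL = "https://api.example.com/events"  # hypothetical endpoint
    buffer = []

    session = requests.Session()  # reuses the TCP/TLS connection (keep-alive)

    def record(url: str) -> None:
        """Queue a visited URL instead of POSTing it immediately."""
        buffer.append({"url": url, "ts": time.time()})

    def flush() -> None:
        """Send the whole batch in a single POST."""
        global buffer
        if not buffer:
            return
        batch, buffer = buffer, []
        resp = session.post(API_URL, json={"events": batch}, timeout=10)
        resp.raise_for_status()

Calling flush() on a timer (every few minutes, say) turns thousands of tiny requests into a handful of large, compressible ones.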
All in all: I advise you to start with the simplest solution there is; in your case that seems to be REST. Design your code so that a transition to another protocol is as simple as possible. Optimize later if needed. Always measure.
Btw, there are lots of valid privacy and security concerns around your extension. For example, I'm quite surprised that you didn't mention TLS at all. It matters not only for security, but also for performance: establishing TLS connections is not free (although once established, encryption does not affect performance much).
Putting my discomfort aside (privacy, anyone?)...
Assuming your extension collates the information, you might consider "pushing" to the server every time the browser starts/quits and then once again every hour or so (users hardly ever quit their browsers these days)... this would make REST much more logical.
If you aren't collating the information on the client side, you might prefer a WebSocket implementation that pushes data in real time.
However, whatever you decide, you would also want to decouple the API from the transmission layer.
This means that (ignoring authentication paradigms) the WebSockets and REST implementations would look largely the same and be routed to the same function that contains the actual business logic... a function you could also call from a script or from the terminal. The network layer details should be irrelevant as far as the API implementation is concerned.
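As a rough sketch of that decoupling in Python (the Flask route, the websockets handler, and the persist_urls function are all hypothetical names; the point is the layering):

    import json
    from flask import Flask, request, jsonify

    app = Flask(__name__)

    def persist_urls(events):
        """Business logic, independent of transport.

        Also callable from a script or from the terminal.
        """
        for event in events:
            print("persisting", event["url"])  # stand-in for the real storage call
        return len(events)

    # REST transport: a thin wrapper around the business logic.
    @app.route("/events", methods=["POST"])
    def events_rest():
        count = persist_urls(request.get_json()["events"])
        return jsonify({"persisted": count})

    # WebSocket transport (e.g. registered via websockets.serve):
    # same logic, different framing.
    async def events_ws(websocket):
        async for message in websocket:
            count = persist_urls(json.loads(message)["events"])
            await websocket.send(json.dumps({"persisted": count}))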
As a last note: I would never knowingly install an extension that collects so much data on me. Especially since URLs often contain private information (used for REST API routing). Please reconsider if you want to take part in creating such a product... they cannot violate our privacy if we don't build the tools that make it possible.

What is ZeroMQ underlying design architecture

I am comparatively new to ZeroMQ and would like some suggestions regarding its internal architecture.
I am planning to use ZeroMQ as a messaging framework for my work. The basic idea I want to achieve is to be able to dynamically scale the infrastructure based on the load and the computational capacity required to meet particular workflow deadlines.
So, if there is a need to add more nodes, the application spawns new nodes, and the messaging framework should be able to incorporate the changes as well. I should also be able to point to where the additional computations should occur, or see how the framework dynamically adds the new nodes (if any). An event on a particular node decides the subsequent actions to be performed on other nodes. Here is the scenario, or the stack, that I am thinking of, but I wanted to know if it makes sense:
User applications
ZeroMQ messaging
Squid-Content based routing
Overlay
Physical Substrate
I am a bit skeptical about the above stack, as I believe ZeroMQ helps one achieve most of the functionality, thereby making the stack simpler.
A few points about my stack:
The physical substrate is the total set of nodes that are available for computation or as data sources.
The overlay is a logical network built dynamically on top of the physical network, based on the closest nodes available for a particular workflow; i.e. if two nodes exchange data frequently, those two nodes are placed logically close to one another. Is a separate overlay like Chord required when we use ZeroMQ?
Squid is basically used for content based routing. Is Squid required when we use ZeroMQ?
ZeroMQ messaging is for the communication between different nodes for an application.
Basically, what I wanted to know is whether the above stack can be made simpler, given that ZeroMQ has richer functionality. If so, can someone share their thoughts? I am going through the ZeroMQ documentation, but I am finding it a bit difficult to understand the intrinsic design of ZeroMQ. Please help.
Thanks
There's so much specific to your use-case here that it's almost impossible to give any definite answers. ZeroMQ is not a direct replacement for the concepts you've built into your architecture, however it may meet the goals you're trying to meet depending on how you're using them.
My suggestion would be to put your current architecture aside and start trying to build up a new one with ZMQ as its core, and see where you run into limitations that are solved by the other parts of your stack.
As for the "intrinsic design" of ZMQ, here's the basics that you need to understand as a starting point:
A ZMQ socket handles connection details for you, including managing network hiccups - but this has limits that you'll need to know
There are different kinds of ZMQ sockets, and they have opinions about how you use them. Some of them communicate asynchronously, some of them are strictly synchronous, some are one way, some are bi-directional.
If a connection between two sockets is severed (e.g. one node goes down, there is a network failure - something more than a momentary hiccup), it's your job to recognize that and re-establish that connection
There is no built-in brokering or topology; you have to design and build all of that yourself.
... ultimately, ZMQ provides a toolset for you to build a messaging framework, it does not provide a fully realized messaging framework out of the box. So, yes, it has the power to replace some of the other tools you're currently using, but you'll have to build it.
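As a minimal illustration of that socket model, here is a sketch using the pyzmq bindings (the strictly synchronous REQ/REP pair is one of those "opinions"):

    import zmq

    ctx = zmq.Context()

    # "Server": REP sockets must alternate recv() -> send().
    server = ctx.socket(zmq.REP)
    server.bind("tcp://*:5555")

    # "Client": REQ sockets must alternate send() -> recv().
    client = ctx.socket(zmq.REQ)
    client.connect("tcp://localhost:5555")

    client.send(b"task")      # queued; ZMQ handles the connection details
    print(server.recv())      # b'task'
    server.send(b"result")
    print(client.recv())      # b'result'

Note what is missing: if either peer dies mid-exchange, nothing here detects or repairs it. That re-establishment logic, and any brokering topology, is yours to build.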

Experiences with (free) embedded TCP / IP stacks? [closed]

Does anyone have especially good (or bad) experiences with any of the following embedded TCP / IP stacks?
uIP
lwIP
Bentham's TCP/IP Lean implementation
The TCP/IP stack from this book
My needs are for a solid, easy-to-port stack. Code size isn't terribly important, performance is relatively important, but ease of use & porting is very important.
The system will probably use an RTOS, though that hasn't been decided; in my experience most stacks can be used with or without an RTOS. Most likely the platform will be an ARM variant (ARM7 or Cortex-M3 in all likelihood).
Not too concerned about bolting the stack to the Ethernet driver, so that isn't a big priority in the selection.
I'm not terribly interested in extracting a stack out of an OS, such as Linux, RTEMS, etc.
I'm also not interested in commercial offerings such as Interniche, Micrium, etc...
The stack doesn't need all sorts of bells & whistles, doesn't need IPv6, and I don't need any stuff on top of it (web servers, FTP servers, etc..) In fact it's possible that I'll only use UDP, although I can envision a couple scenarios where TCP would be preferable.
Experiences with other stacks I've missed are of course also very much of interest.
Thanks for your time & input.
I've used both uIP and lwIP extensively.
uIP
Great if you're only wanting something basic, like a bootloader
Small footprint.
Uses polling, so we never got over 3 kbit/s with it :-(
No DHCP 'out of the box'
Poor UDP support
lwIP
Fully interrupt-driven, so much faster (~10x)
Includes DHCP with AutoIP failover
UDP with multicast
Plus more
EDIT:
And we've never used either with an RTOS as there has never been a need.
+1 for lwIP.
We used this successfully on a project a few years back and found it to be generally very reliable. We found and fixed a few issues (generally corner cases within the TCP code) which we submitted back to the project, and even though the project has moved on quite a bit since then we didn't generally find it lacking in any features.
As you suspect it will work with or without an RTOS. It took about a week to get running on our system with an RTOS, which included changes we had to make to support an unusual DSP compiler. As you're probably using GCC on ARM you can avoid any of that effort.
It does contain many more features than you require, but if your requirements change a few years down the line then you'll be better off having started out with a more substantial stack.
lwIP
I worked on a project with a 3G modem where we needed a UDP/IP stack (no TCP) on top of PPP. We narrowed down to uIP and lwIP. We picked lwIP in the end because it had PPP already (uIP doesn't), and we had enough RAM to spare.
Our particular project didn't use an RTOS, and lwIP was fine to use without an RTOS.
I wasn't directly involved in porting the lwIP code, although I worked on the modem driver to interface with it. My impression was that the porting took a couple of weeks to get everything going smoothly, for our engineer who had previous TCP/IP experience. The lwIP code has been hacked by many people, and consequently has some rough edges (e.g. someone threw in a lone malloc() somewhere) but it worked for us after a little tweaking. We tested it with an independent validation suite.
In summary, it was "suitably functional" for our UDP/IP and PPP needs (but I can't comment on its TCP capabilities).
+1 for lwIP.
It is included in the Luminary Micros (now TI) Serial to Ethernet reference design with some added capabilities (some sort of "server side scripting" and cgi) working on bare metal (without RTOS).
It is rock solid and very performant with only 32 KB of RAM.
I am pleased with lwip on the Stellaris Cortex-M3.
StellarisWare for the LM3S6965 eval board includes the enet_lwip demo. This is a small web server running over lwip which is running over bare metal -- no FreeRTOS in this case. The system is driven by the timer and Ethernet interrupts. It was pretty easy to rip out the web server and drop in my app. I did not have to become an lwip expert to get this running the first time.
Later I realized that my app was intrinsically up-call driven. At first, it had a sockets-to-upcall gasket. I replaced that layer with a much simpler one that translates lwip native upcalls to the app's upcalls, and optioned out lwip's socket API. This saved more flash and RAM space, and made the whole thing faster and simpler. With a little tweaking I got it running on the S2E board using 52K flash and 30K RAM.
You can try the open-source FNET TCP/IP stack.
I've used the Microchip TCP/IP stack and have been very happy with it. It was very easy to implement, there is lots of demo code and tutorials available, and it supports a lot of protocols: HTTP, TFTP, SMTP, SNTP, etc. One point that doesn't match your requirements, however, is that it is not easily portable to another architecture. In fact, I think the license for the stack explicitly forbids this, because Microchip wants you to run the stack only on their hardware: the PIC18, PIC24, and PIC32. There is, however, an external Ethernet controller they sell, the ENC28J60, with which they allow you to use certain portions of the stack.
I have used Interniche on FreeRTOS.
It's a full-fledged stack and supports quite a few features.
Since you are looking for a non-commercial version, my vote is on lwIP.

Designing an application protocol [closed]

I have an existing standalone application which is going to be extended by a 3rd-party, using a network protocol. The capabilities are already implemented, all I need is to expose them to the outside.
Assuming the transport protocol is already chosen (UDP), are there any resources that will help me to design my application protocol?
There seems to be a lot of information about software design, but not on protocol design.
I've already looked at Application Protocol Design.
See the Jabber protocols design guidelines and RFC 4101. Although the latter is aimed at making RFCs easier for reviewers to understand, it provides some interesting advice.
Have you looked at Google Protocol Buffers? It seems like a good way to resolve this issue.
You can create an endpoint that communicates with your existing app and then responds from 'outside' using the protobuf protocol. It's binary, so it's tiny and fast, and you don't have to write your own protocol manager, because you can use Google's. The downside is that it has to be implemented on both sides of the system (on your 'server' side and on the consumer/client side).
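A sketch of what that looks like on the wire in Python, assuming a hypothetical Request message compiled with protoc (the module and field names are illustrative):

    # request.proto, compiled with: protoc --python_out=. request.proto
    #
    #   syntax = "proto3";
    #   message Request {
    #     string command = 1;
    #     int64  id      = 2;
    #   }

    import request_pb2  # module generated by protoc

    # Sender: build the message and serialize it to a compact binary blob.
    req = request_pb2.Request(command="status", id=42)
    payload = req.SerializeToString()

    # Receiver: parse the bytes back into a structured object.
    decoded = request_pb2.Request()
    decoded.ParseFromString(payload)
    assert decoded.command == "status"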
Another recommendation for protocol buffers - nice tight binary with little effort. Note, however, that while the binary protocol is well defined, there isn't yet an agreed RPC standard (several are in progress, tending to lean towards TCP or HTTP).
The spec makes it very easy to have the client and server in different architectures, which is good - plus it is extensible.
Caveat: I'm the author of one of the .NET versions, so I may well be biased ;-p
First off, UDP is primarily a one-way broadcast transport method. It is also potentially lossy, so you need to be able to handle missing packets and out-of-order packets. If you need any level of reliability from UDP, or require two-way connections, you will end up needing just about everything from TCP, so you might as well go with that to start with and let the network stack take care of it.
Next up, if your data is potentially larger than a single IP packet, then you will need some way of identifying the start and end of each packet, and a means of handling illegal or corrupt packets. I would recommend some kind of header with the packet length, some kind of footer, and maybe a checksum.
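A minimal sketch of such framing in Python (the length-prefix header, CRC32 checksum and magic footer are illustrative choices, not any standard):

    import struct
    import zlib

    FOOTER = b"\xde\xad"  # illustrative end-of-packet marker

    def frame(payload: bytes) -> bytes:
        """Prefix a length + CRC32 header, append a footer."""
        header = struct.pack("!II", len(payload), zlib.crc32(payload))
        return header + payload + FOOTER

    def unframe(packet: bytes) -> bytes:
        """Validate and strip the framing; raise on corrupt packets."""
        length, checksum = struct.unpack("!II", packet[:8])
        payload = packet[8:8 + length]
        if len(payload) != length or packet[8 + length:] != FOOTER:
            raise ValueError("truncated or corrupt packet")
        if zlib.crc32(payload) != checksum:
            raise ValueError("checksum mismatch")
        return payload

    assert unframe(frame(b"hello")) == b"hello"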
Then you need some way of encoding the messages and responses. There are many RPC protocols around. You could look at SOAP, or design a custom XML-based protocol, or a binary one.
You should think hard about whether you really want to design, document and maintain your own protocol, or use something that already exists. There is probably already a documented protocol that matches your needs. Depending on what you are doing, it may look like overkill at first, and implementing the whole spec may look tedious and a lot less fun than writing your own; but if you intend for your application to still be actively developed in a few years, it should save you a lot of time and money to use something that already exists and is known by third parties. Besides, if you can use an existing library for that protocol, the implementation will be a lot faster.
Designing a new protocol is more fun than implementing one, but maintaining one is even less fun, as you have to live with all its defects. No protocol is perfect, but if you have never designed one, you can be assured you will make more mistakes designing it than the people who designed the existing, well-known protocols you could use instead.
In short, leverage what already exists whenever possible.
If you choose XML, keep in mind that you will have a giant overhead of markup.
A simple binary protocol will also need far fewer resources to parse compared to XML.
If you do not want to build your protocol from the ground up, you should take a look at SOAP. Support varies across programming languages, but cross-language communication is explicitly encouraged.
Unfortunately, SOAP over UDP seems to have stalled in its infancy; HTTP is most commonly used.
I have an existing standalone application which is going to be extended by a 3rd-party, using a network protocol.
It would help to know a little more about what your program does and what the nature of these 3rd party extensions are. Maybe some rationale for using UDP?

Why isn't bittorrent more widespread? [closed]

I suppose this question is a variation on a theme, but different.
Torrents will never replace HTTP, or even FTP download options. This said, why aren't there torrent links next to those options on more websites?
I'm imagining a web system whereby files can be downloaded via HTTP, say from http://example.com/downloads/files/myFile.tar.bz2; torrents can be cheaply autogenerated and stored in /downloads/torrents/myFile.tar.bz2.torrent, and the tracker might live at /downloads/tracker/.
Trackers are a well-defined problem, and not incredibly difficult to implement, and there are many drop-in alternatives out there already. I imagine it wouldn't be difficult to customise one to do what is needed here.
The autogenerated torrent file can include the normal HTTP server as a permanent seed; the extensions to do this are well supported by most, if not all, of the major torrent clients, and require no reconfiguration or special handling on the server end (they use stock-standard HTTP Range headers).
Personally, if I set up such a system, I would then speed-limit the /downloads/files/ directory to something reasonable, say 40-50 kb/s, depending on what exactly you were trying to serve.
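A rough sketch of the autogeneration step in Python (single-file torrents only; "url-list" is the web-seed key those client extensions read, and the tiny bencoder is just for illustration):

    import hashlib
    import os

    def bencode(value) -> bytes:
        """Minimal bencoder for ints, bytes/str, lists and dicts."""
        if isinstance(value, int):
            return b"i%de" % value
        if isinstance(value, str):
            value = value.encode()
        if isinstance(value, bytes):
            return b"%d:%s" % (len(value), value)
        if isinstance(value, list):
            return b"l" + b"".join(bencode(v) for v in value) + b"e"
        if isinstance(value, dict):
            return b"d" + b"".join(bencode(k) + bencode(v)
                                   for k, v in sorted(value.items())) + b"e"
        raise TypeError(type(value))

    def make_torrent(path: str, tracker: str, http_mirror: str) -> bytes:
        piece_length = 256 * 1024
        pieces = b""
        with open(path, "rb") as f:
            while chunk := f.read(piece_length):
                pieces += hashlib.sha1(chunk).digest()  # concatenated piece hashes
        return bencode({
            "announce": tracker,
            "url-list": [http_mirror],  # the HTTP server doubles as a permanent seed
            "info": {
                "name": os.path.basename(path),
                "length": os.path.getsize(path),
                "piece length": piece_length,
                "pieces": pieces,
            },
        })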
Does such a file delivery system exist? Would you use it if it did: for your personal, company, or other website?
First of all: http://torrent.ubuntu.com/ exists for Ubuntu torrents.
Second: Opera has a built-in torrent client.
Third: I agree about the stigma attached to P2P. So much so that we have sites that need to be called "legaltorrents" and the like, because by default a torrent is assumed to be an illegal thing - and let us not kid ourselves, it usually is.
Getting torrents into the mainstream is an excellent idea. You can't tamper with the files you are seeding, so there is no risk there.
The big reason is not really stigma, though. The big reason is analytics, and their protection. With torrents, these people (companies like Microsoft and the like) would not be able to gather important information about who is doing the downloads (not personally identifiable information, and quickly aggregated away). With torrents, other people would be able to see this information, at least partially. A company would love to seed the torrent of an evaluation version of a competing company's product, just to get an idea of how popular it is and where it is being downloaded from. It is not as good as hosting the download on your own web servers, but it is the next best thing.
This is possibly the reason why the Vista download on Microsoft's site, or its many service packs and SDKs, are not available as torrents.
Another thing is that people just won't participate, and it is not difficult to figure out why: the number of hoops you have to jump through. You've got to figure out the firewall, the NAT thing, then the uPnP thing, and then maybe your ISP is throttling your bandwidth, and so on.
Again, I would (and do) seed to a 1.5 ratio or beyond for the torrents I download, but that is because they are Linux, OpenOffice, that sort of thing. I would probably feel funny seeding Adobe Acrobat, or some evaluation version or something, because those guys are making profits and I am not fool enough to save money for them. Let them pay for HTTP downloads.
EDIT (based on the comment by monoxide):
For the freeware out there and for SF.net downloads, the problem is that they cannot rely on seeders and will need their fallback mirrors anyway, so for them torrents just add to the expense. One more reason that comes to mind is that even in software shops, internet access is now thoroughly controlled, and the ports that torrents rely on, plus the upload requirement, are an absolute no-no. Since most people who need these sites and their downloads are in these kinds of offices, they will continue to use HTTP.
But even that is not the whole answer. These people have restrictions on redistribution in their licensing terms, and so their problem is this: if you are seeding their software, you are redistributing it. That is a violation of their licensing terms, so if they host a torrent download and allow you to seed it, that is entrapment and they can be sued (I am not a lawyer; I learned that from watching TV). They would have to delicately change their licensing to allow distribution by seeding torrents but not otherwise. This is an easy enough concept for most of us, but the vagaries of the English language and the hard look on the face of the judge make it a very tricky thing to do. The judge may personally understand torrents, but sitting up there in the court he has to frown and pretend not to, because it is not documented in legalese.
That is the ditch they have dug, and there they fall into it. Let us laugh at them and their misery. Yesterday's smart is today's stupid.
Cheers!
I'm wondering if part of it is the stigma associated with torrents. The only software that I see providing torrent links are Linux distros, and not all of them (for example, the Ubuntu website does not provide torrents to download Ubuntu). However, if I said I was going to torrent something, most people associate it with illegal downloads (music, video, TV shows, etc).
I think this might come from the top. An engineer might propose using a torrent system to provide downloads, yet management shudders when they hear the word "torrent".
That said, I would indeed use such a system. Although I doubt I would be able to seed at home (I found that the bandwidth kills the connection for everyone else in the house). However, at school, I probably would not only use such a system, but seed for it as well.
Another problem, as mentioned in the other question, is that torrent software is not built into browsers. Until it is, you won't see widespread use of it.
Kontiki (which is very similar to BitTorrent) makes up about 10% of all internet traffic by volume in the UK, and is exclusively used for legal distribution of "big media" content.
There are people who won't install a torrent client because they don't want the RIAA sending them extortion letters and running up legal fees in court after they (the RIAA) break into your computer and see MP3 files that are completely legal backup copies of legally purchased CDs.
There's a lot of fear about torrents out there and I'm not comfortable with any of the clients that would allow even limited access to my PC because that's the "camel's nose in the tent".
The other posters are correct. There is a huge stigma against torrent files in general, due to their use by hackers and people who violate copyright law. Look at The Pirate Bay: torrent files are all they "serve". A lot of cable companies in the US have also started traffic-shaping torrent traffic on their networks, because it is such a bandwidth hog.
Remember that torrents are not a download accelerator. They are meant to offload someone who cannot afford (or maybe just doesn't desire) to pay for all the bandwidth themselves. The users who are seeding take the majority of the load. No one seeding? You get no files.
The torrent protocol is also horrible for being so darn chatty: as much as 40% of your communications on the wire can be control-flow messages and chatter between clients asking for pieces. This is why cable companies hate it so much. There are also problems in the torrent end game (where a client asks a lot of peers for the final parts in an attempt to complete the torrent, but can sometimes end up with 0 available parts, so you are stuck at 99% and seeding for everyone).
HTTP is also pretty well established and can be traffic-shaped for load balancers, etc., so most legit companies that serve up their own content can afford to host it, or use someone like Akamai to replicate the data and then load-balance.
Perhaps it's the ubiquity of HTTP-enabled browsers; you don't see as many FTP download links anymore, so that could be the biggest factor (ease of use for the end user).
Still, I think torrent downloads are a valid alternative, even if they won't be the primary download.
I even suggested SourceForge auto-generate torrents for downloads, and they agreed it was a good idea... but they haven't implemented it (yet). Here's hoping they will.
Something like this actually exists at speeddemosarchive.com.
The server hosts a Metroid Prime speedrun and provides a permanent seed for it.
I think that it's a very clever idea.
Contrary to your idea, you don't need an HTTP URL.
I think one of the reasons is that (currently) torrent links are not fully supported inside web browsers... you have to fire up the torrent client and so on.
Maybe it's time for a little Firefox extension/plugin? Damn, now I am at work! :)
