FreeSWITCH minimal installation and module selection - asterisk

As someone who is very new to open-source PBX projects like Asterisk and FreeSWITCH, I am grappling with some information overload. I have read the basic FreeSWITCH docs on the wiki, but I still have a few questions. Since I am not very familiar with the terminology, I will try to use close approximations.
I am trying to create a small, minimalistic build of FreeSWITCH that needs to run on a rather old laptop (Celeron 1 GHz, 512 MB RAM, 20 GB HDD, already running Debian "Wheezy"), and set it up as a 6-port GSM-SIP/Jabber gateway. By "small" and "minimalistic", I mean a build that doesn't include modules or optional software that isn't absolutely necessary (e.g. no need for IVR announcements or Skype integration) -- to keep the memory footprint smallest and occupy less hard-disk real estate.
The rough idea is to have 6 GSM ports (via the GSMopen module, similar to chan_dongle) towards the public telephone network, about 60 SIP extensions, and support for up to 6 calls involving GSM ports plus about 6 SIP-SIP calls (intra-PBX) on this setup. I have read that the CPU overhead of the GSMopen module is pretty low, so I am guessing this is possible.
Can someone confirm this to be a realistic goal?
What might be the minimum set of modules to select for a minimalistic build?
For modules not chosen during the initial build, can those be added later? If so, would that require me to rebuild FreeSWITCH completely, rebuild only the modules, or would everything be built anyway so that only configuration changes are needed to load and configure the modules?
Is there any rough estimate of the maximum call rate that could be supported in such a configuration? For SIP-SIP calls? Given the underpowered processor and the small amount of RAM (by modern standards), I am guessing that both will be bottlenecks, but adding RAM might still be possible (even if costly and difficult).
I have read that "hooks" can be created using Lua/Python/Java etc.. However if someone share share few examples of what-all is possible using such hooks, it would make the concept clearer. Can one hope to write an application like "missed call log" or "redirect on no answer" using these hooks?

Can someone confirm this to be a realistic goal?
Yes, this is quite realistic. You should aim for as little transcoding as possible, because that's where the CPU resources go. But even with a 1 GHz Celeron, 6 transcoded sessions seem quite realistic. It needs testing, though :)
What might be the minimum set of modules to select for a minimalistic build?
Just start with the default list of modules and add gsmopen (I have no experience with GSM gateways, so I can't help with that part). The memory footprint is pretty low, and you may need some of those modules later.
For modules not chosen during the initial build, can those be added later?
As far as I remember, the wiki describes this process: you edit modules.conf, rebuild and install just that module (for example make mod_gsmopen-install), and load it from the console. A complete rebuild of FreeSWITCH is not needed.
Is there any rough estimate of the maximum call rate that could be supported in such a configuration? For SIP-SIP calls? Given the underpowered processor and the small amount of RAM (by modern standards), I am guessing that both will be bottlenecks, but adding RAM might still be possible (even if costly and difficult).
It really depends on the complexity of your dialplan. Each context consists of a number of conditions, which do regexp matches on channel variables. So the more complex your dialplan is, the fewer calls per second (CPS) you get. But for a 6-channel gateway, I don't see this as a problem. The GSM network will be much slower than your box :)
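For a sense of scale, a context is usually just a handful of extensions like the minimal sketch below (illustrative only, not taken from the default config); each condition is one regular-expression match against a channel variable such as destination_number:

<extension name="internal_sip">
  <condition field="destination_number" expression="^(10[0-9][0-9])$">
    <action application="bridge" data="user/$1"/>
  </condition>
</extension>

A 6-channel gateway will only evaluate a few such expressions per call attempt, which is negligible work even on a 1 GHz Celeron.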
I have read that "hooks" can be created using Lua/Python/Java etc.. However if someone share share few examples of what-all is possible using such hooks, it would make the concept clearer. Can one hope to write an application like "missed call log" or "redirect on no answer" using these hooks?
You can control virtually every aspect of FreeSWITCH behavior through these hooks. There are even examples where the complete dialplan is re-implemented by an external program (Kazoo does that).
The simplest mode of operation is when your Lua/JS/Perl/Python script is launched from within the dialplan: then it receives a "session" object, and you can do whatever you want with the call: play sounds, bridge, forward, make a new call and bridge them together, and so on. Here in my blog there's a little practical example.
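To make that concrete, here is a rough sketch of the kind of "redirect on no answer" / "missed call log" logic asked about above, written as a mod_python dialplan script (assuming mod_python is built and loaded; the extension number, timeout and fallback destination are placeholders, and the exact session API is documented on the wiki):

import freeswitch

def handler(session, args):
    # mod_python calls handler() with a live session object for the incoming leg
    caller = session.getVariable("caller_id_number")
    session.execute("set", "call_timeout=20")      # ring the extension for at most 20 s
    session.execute("bridge", "user/1001")         # placeholder SIP extension
    # If the bridge never completed, note the missed call and send the caller elsewhere
    if session.getVariable("originate_disposition") != "SUCCESS":
        freeswitch.consoleLog("notice", "missed call from %s\n" % caller)
        session.execute("transfer", "1002 XML default")   # placeholder fallback extension

The same structure carries over almost line for line to Lua or JavaScript, since they all expose the same session object.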
Then, you can build an external application which connects to the FreeSWITCH event socket, monitors events, and performs actions on active calls.
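As a sketch of that event-socket mode, tied back to the "missed call log" idea: a small external script can subscribe to hangup events and record the unanswered calls. This assumes the ESL Python bindings that ship with FreeSWITCH and the default mod_event_socket address and password; the header names are the usual ones but are worth verifying against your own events:

import ESL   # Python bindings built from FreeSWITCH's esl library

con = ESL.ESLconnection("127.0.0.1", "8021", "ClueCon")   # default event-socket settings
if not con.connected():
    raise SystemExit("could not reach the FreeSWITCH event socket")

con.events("plain", "CHANNEL_HANGUP")
while True:
    e = con.recvEvent()
    if e is None:
        break
    # An unanswered call shows up as a hangup with cause NO_ANSWER or ORIGINATOR_CANCEL
    if e.getHeader("Hangup-Cause") in ("NO_ANSWER", "ORIGINATOR_CANCEL"):
        print("missed call from %s to %s" % (
            e.getHeader("Caller-Caller-ID-Number"),
            e.getHeader("Caller-Destination-Number")))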
Also, it can be done in the opposite direction: you run a server, and FS connects to it with its socket library.
Also, you can have an HTTP service which delivers pieces of XML configuration to FreeSWITCH, which requests them on every call (this is the most CPU-intensive approach). This way, you can feed FS from some internal database and build fault-tolerant systems.
I hope this helps :)
You can also find me on Skype if needed.

FreeSWITCH is not really memory-hungry, and you can simply start with the default set of modules (the best is to use the prebuilt Debian packages). For example, on my 64-bit machine, the FreeSWITCH process occupies only about 35 MB of resident memory:
freeswitch@vx03:~$ uname -a
Linux vx03 2.6.32-5-xen-amd64 #1 SMP Thu Nov 3 05:42:31 UTC 2011 x86_64 GNU/Linux
freeswitch@vx03:~$ ps -p 11873 v
PID TTY STAT TIME MAJFL TRS DRS RSS %MEM COMMAND
11873 ? S<l 10:29 0 0 258136 36852 2.3 /opt/freeswitch/bin/freeswitch -nc -rp -nonat -u freeswitch -g freeswitch
I will go through the rest of your questions later today

Related

How to find i/o bottleneck within asp.net app

We have a high-traffic website which generates a lot of I/O. Within 10 minutes it has read over 10 GB of data (the w3wp process in question, as seen in Task Manager). For memory and application hangs I have been using WinDbg with success, but I don't know how I can find the object(s) / method(s) within a process which are responsible for the highest I/O.
Is this even possible?
Edit
The question is: is there a way to profile I/O operations in a .NET assembly, e.g. a list of threads sorted by highest disk I/O (or something similar that would help me see where to look)?
ANTS Performance Profiler
I have used this tool to great success - finding the specific instructions that were causing ~512 GB of memory on a high-volume web farm to get chewed up within 5-10 minutes. It sounds like a very similar situation to yours.
Now, to be realistic - it's not going to magically solve your problem. It still requires a lot of setup, thorough analysis and detective work. But this tool definitely took the problem from "practically unsolvable" to "solvable within days".
Update:
As I mentioned in the comments (and Ben Emmett echoed), we can use ANTS to monitor memory, file system handles - pretty much any resource consumption - and drill down the call stack to see the effects of specific routines.
I came across the tool AppDynamics Lite, which displays your application's call costs and performance in a visual way. It might help you find out which functions are making the most costly I/O operations.
Quoting:
Understand the health of your CLR with key metrics like response time, throughput, exception rate, and garbage collection time as well as key system resource like CPU, memory and disk I/O.
Worth giving it a shot, as there is a free 30-day trial. Hope it helps.
P.S.: I'm not affiliated with AppDynamics in any way.
You can use the (free) Windows Performance Toolkit from Windows 8, which also runs on Windows Vista and later. There you can turn on system-wide profiling to see what is going on in all processes at once. No instrumentation is necessary. Only one reboot is required to set an arcane registry key, which WPRUI.exe does automatically.
With XPerf you can enable I/O Init stack walking so that a call stack is taken for every I/O that is started. The only issue is that the stacks will be broken for 64-bit processes, which means you will see only the first method above the BCL methods of your code, because there is a Windows 7 bug in the stack-walking capabilities of the OS.
A workaround is to NGen your assemblies, move to Server 2012, or switch to x86 for profiling to see deeper call stacks.
You will see all file I/O and CPU activity even without any call stacks, along with the file names and how long the hard disk was in use. That should give you good information about which part of your app is causing the disk I/O. From the partial call stacks you should be able to pinpoint your issue even without full stacks.
The tool will give you much more insight than any commercially available profiler, at the expense that you need to learn how to use it. Since the call stacks do not end at your code or in user mode but go down into the kernel, you can also determine whether e.g. the virus scanner is causing significant I/O delays. But you need to know how your processor works. This toolset was originally aimed at kernel devs, which explains why you see so many useless columns.
In the picture below you see file I/O and CPU consumption stacked. When you select your high-I/O file in the disk I/O graph, it highlights in the CPU consumption view all related call stacks that were taken while that I/O was active. This way you can directly navigate from the I/O to your potentially blocked threads.

Web server tolerance to high client poll rate: Cowboy vs. Yaws web servers

I have been building a real-time notification system. It’s part of a web application, but events have to be seen as soon as they occur. Long polling was not an option because it would be expensive for the web server to hold on to connections when no events are available, so I had to go for short-lived polls.
Each client hits the web server every, say, 2 seconds (this is a fairly high rate). When events are available, they are sent as JSON to the JavaScript client. Now, this requires a server set-up that can handle a high number of short-lived connections. I have implemented one such system using the Yaws web server. However, because Yaws starts quite a number of other services, it feels heavy, and connections begin to get either refused or aborted once they go beyond 30,000 (maybe because I am running some ETS tables in the same Erlang VM that Yaws is running on [separating these may require rpc:call/4, which, I fear, will increase latency]). I know that there are operating-system-specific tweaks to do, and those have been done.
This would not be a problem if it were easy to cluster up several Yaws instances. In Yaws, I am using a few appmods, and I am doing things RESTfully. I was thinking that the Cowboy web server might enhance things a bit here. I have not used Cowboy before, but I have used Misultin. Looking at Cowboy, it is a full-fledged OTP application and it seems to be easy to cluster; being lightweight, it may perhaps increase the number of clients the overall system can handle. Storage is on Mnesia, which I can distribute easily to add more nodes (maybe by replication), so that there is a Cowboy instance in front of every Mnesia instance.
My questions are:
Is my speculation correct, that if I switched from Yaws to Cowboy, I might increase the performance significantly?
Yaws has a clean API via Appmods and the #arg{} record. Does Cowboy have an equivalent of these two things (illustrate please)?
Can Cowboy handle file uploads? If so, which server (Yaws or Cowboy), in your opinion would be better to use in the case of frequent file uploads? Illustrate how file uploads are done with Cowboy.
It is possible to run several Yaws instances on the same machine. Do you think that creating many Yaws instances per server (physical box) and having the client-load distributed across these would help? What do I need to know about doing this?
When I set the yaws.conf parameter max_connections = nolimit, how would I specify the same in Cowboy?
Now, I read the interview with the Cowboy author, where he discusses the reasons why Cowboy is more lightweight than Yaws. He says that
The biggest difference is the use of binaries instead of lists. The generic acceptor pool is another. I could list a lot of other small differences but I figure these aren’t the most interesting.
So, because Cowboy uses the listener-pool library Ranch, plus binaries instead of lists, it somehow ends up able to handle more connections.
Another quote from the same interview:
Since we use one process per connection instead of two, and we use binaries instead of lists, we end up using a lot less memory than other projects without user intervention. Cowboy is also lazy, it doesn’t do anything unless required. So we don’t have much in memory until the user starts calling functions.
I wonder how Yaws handles this case. Somehow, my problem domain needs lightweight HTTP handling. It's actually true that Yaws will lead to more memory consumption compared to, say, Mochiweb, Misultin or Cowboy. My greatest concern is that Yaws has the best/cleanest API: it gives us access to the #arg{} record containing everything we need, so we can pull things out ourselves, whereas the others expose numerous functions for extracting each piece. Even the documentation: the Yaws docs are pretty good and straightforward. Perhaps I need to look at more Cowboy code for things like file uploading and simple GET and POST request handling.
Otherwise, the questions I asked earlier remain pressing concerns. Yaws is pretty good, but it seems to be overkill for this fast, lightweight, short-lived, high-rate poll situation. What do you think?
Your 30,000 refusal limit sounds an awful lot like a 32k limit somewhere: either the default Erlang process count, which is 32k, or some system limit on file descriptors and so on. You should not rule out the possibility that the limitation is on the kernel side of things. I've seen systems reach their limits quite easily due to kernel configurations, which can be really hard to handle.

MPI on a multicore machine

My situation is quite simple: I want to run MPI-enabled software on a single multiprocessor/multicore machine with, let's say, 8 cores.
My implementation of MPI is MPICH2.
As I understand I have a few options:
$ mpiexec -n 8 my_software
$ mpiexec -n 8 -hosts {localhost:8} my_software
or I could also tell Hydra to "fork" rather than "ssh":
$ mpiexec -n 8 -launcher fork my_software
Could you tell me if there will be any differences, or if the behavior will be the same?
Of course, as all my nodes will be on the same machine, I don't want "message passing" to be done through the network (even the local loopback) but through shared memory. As I understand it, MPI will figure that out itself, and that will be the case for all three options.
Simple answer:
All methods should lead to the same performance. You'll have 8 processes running on the cores and using shared memory.
Technical answer:
"fork" has the advantage of compatibility, on systems where rsh/ssh process spawning would be a problem. But can, I guess, only start processes locally.
In the end (unless MPI is configured oddly), all processes on the same machine will end up using "shared memory", and the launcher or host-specification method should not matter for this. The communication method is handled by another parameter (-channel?).
The specific syntax of the host-specification method can let you bind processes to specific CPU cores; then you might get slightly better or worse performance depending on your application.
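If you want to convince yourself that all ranks land on the same host and that communication really goes through shared memory rather than TCP, a quick ping-pong timing between two ranks is enough: intra-node round trips are typically a few microseconds, loopback TCP noticeably more. A small sketch using mpi4py (assumed installed), launched with any of the mpiexec variants above:

from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

# Every rank reports where it runs; on a single box they all print the same host name.
print("rank %d of %d on %s" % (rank, comm.Get_size(), MPI.Get_processor_name()))
comm.Barrier()

# Ranks 0 and 1 time a small ping-pong; the other ranks have nothing more to do.
if rank in (0, 1):
    other = 1 - rank
    t0 = MPI.Wtime()
    for _ in range(10000):
        if rank == 0:
            comm.send(b"x", dest=other)
            comm.recv(source=other)
        else:
            comm.recv(source=other)
            comm.send(b"x", dest=other)
    if rank == 0:
        print("average round trip: %.1f microseconds" % ((MPI.Wtime() - t0) / 10000 * 1e6))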
If you've got everything set up correctly, then I don't see that your program's behaviour will depend on how you launch it, unless, that is, it fails to launch under one or other of the options (which would mean that you didn't have everything set up correctly in the first place).
If memory serves me well, the way in which message passing is implemented depends on the MPI device(s) you use. It used to be that you would use the ch_shmem device. This managed the passing of messages between processes, but it did use buffer space and messages were sent to and from this space. So message passing was done, but at memory-bus speed.
I write in the past tense because it's a while since I was that close to the hardware that I knew (or, frankly, cared) about low-level implementation details and more modern MPI installations might be a bit, or a lot, more sophisticated. I'll be surprised, and pleased, to learn that any modern MPI installation does, in fact, replace message-passing with shared memory read/write on a multicore/multiprocessor machine. I'll be surprised because it would require translating message-passing into shared memory access and I'm not sure that that is easy (or easy enough to be feasible) for the whole of MPI. I think it's far more likely that current implementations still rely on message-passing across the memory bus through some buffer area. But, as I state, that's only my best guess and I'm often wrong on these matters.

How to fork interactive programs

I have an interactive program with a high start-up cost. After start-up, I'd like to fork the process into separate concurrent sessions. Ideally each separate session would become a GNU screen window but being able to individually telnet/ssh to each session would be fine too.
It shouldn't be too hard to write this from scratch but it seems like something that should have been done/considered before and maybe there are reasons why this is a bad idea...
I know that an alternative approach is to use shared memory for the data that's expensive to initialize. The reason I'm reluctant to go down that path is that the shared data uses C++ data structures with pointers, which makes it hard to mmap it into an unrelated process.
This is what any database does: the startup is phenomenally expensive, but the DB provides several different means of connecting - for example, Oracle's BEQ protocol.
Telnet has issues; consider ssh. Either way, consider a daemon that answers connection requests on a port (you would use AF_UNIX sockets, I guess), then creates a separate session for each client.
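A minimal sketch of that daemon idea (the socket path and the echo loop are placeholders; the real program would run its interactive session in run_session): the parent pays the start-up cost once, then forks a child per connection, and every child inherits the already-initialized state. Each session could then be attached to a GNU screen window with something like socat.

import os, signal, socket

SOCK_PATH = "/tmp/myapp.sock"        # placeholder path

def expensive_startup():
    # Stand-in for the initialization you only want to pay for once.
    return {"big": "state"}

def run_session(conn, state):
    # Placeholder interactive session: echo lines back until the client disconnects.
    f = conn.makefile("rw")
    f.write("ready (%d keys of state)\n" % len(state)); f.flush()
    for line in f:
        f.write("you said: %s" % line); f.flush()

state = expensive_startup()
signal.signal(signal.SIGCHLD, signal.SIG_IGN)    # don't leave zombie children behind
if os.path.exists(SOCK_PATH):
    os.unlink(SOCK_PATH)
srv = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
srv.bind(SOCK_PATH)
srv.listen(5)
while True:
    conn, _ = srv.accept()
    if os.fork() == 0:               # child: run one session with the inherited state
        srv.close()
        run_session(conn, state)
        os._exit(0)
    conn.close()                     # parent: keep accepting new sessions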
Stevens' Advanced Programming in the UNIX Environment or Rochkind's Advanced UNIX Programming have discussions and complete examples. Since my Stevens book seems to have gone on an extended holiday, see Rochkind 4.3 and 4.10.
And no, there is no pending doom for using this approach.

Serial Comms dies in WinXP

A bit of history: We have an application, which was originally written many years ago (1998 is the first date in PVCS but the app is about 5 years older than that as it originally was a DOS program). This application communicates with a piece of hardware via serial. When we got to Windows XP we started receiving reports of the app dying after a short time of running. It seems that the serial comms just 'died' and the app was left in a stuck state. The only way to recover from this situation was to restart the application.
The only information I can find regarding this problem is an apparent claim that the Windows message system would miss that data had been received, the buffer would fill, and the system would get stuck. This snippet of information was left in an old Word document, but there's no evidence to back it up. It also mentions that the problem is only prevalent at high baud rates (115200+).
The solution was to provide customers with USB->Serial converters along with the hardware.
Today: We are working on a new version of the hardware that will run across a network as well as over serial ports. To allow me to work on the network code without the actual hardware, we are using a VSCOM NetCom113 device. It also installs a virtual COM port on the user's (i.e. my) machine.
Now that I have the network code integrated with the app, it appears that the NetCom device exhibits the same behaviour as a physical COM port. This is undesirable, as I need the app to run for longer than ~30 seconds.
Google turns up nothing on the problem we are experiencing.
I was wondering:
Has anyone experienced this before? If so what did you do to fix/workaround the problem?
Does anyone have any suggestions as to whether the original author of the document is correct and what I can do to test the theory?
Unfortunately I can't post code, as the serial code is tightly coupled with the rest of the system, but I can answer questions about it.
Updates:
The code is written using the Win32 comm routines - so I am using CreateFile and ReadFile. There are also judicious calls to GetOverlappedResult.
It's not hanging per se; it's just that the comms stop. You can access the menus and click the buttons, but nothing can interact with the connected hardware. Using RealTerm you can see that no data is coming in or going out.
I think the reference to the Windows message means that the problem is internal to Windows: data has arrived, but the kernel has missed it and thus not told the rest of the system about it.
Flow control is not used.
Writing a 'simple' test is difficult due to the fact that the code is tightly coupled and the underlying protocol is quite complex, so it would require a lot of work.
Are you using DOS-style serial code, or the Win32 CreateFile approach?
If the former, be very suspicious: if at all possible I'd convert to the latter.
If the latter, do you know on what kind of system call it's hanging? Are you in a blocking read call? or an overlapped I/O call? or waiting on an event? (I'm not sure I have enough experience to help, but those are the kinds of questions that come to mind)
You might also check into the queue size, which you can set with the SetupComm function.
I don't buy the "Windows Message system" stuff -- it sounds fishy; you can write good Win32 serial i/o code that never uses Windows messages.
edit: does your Overlapped I/O use events? I seem to remember something about auto-reset events occasionally missing their trigger... check your overlapped I/O calls very carefully to see whether you're handling the possible outcomes properly. Perhaps there's a way to make your code more robust by automatically cancelling the overlapped i/o and restarting another read. (I assume the problem is in the read half, not the write half?)
edit 2: A suggestion: assuming the win32 side has missed a byte or packet, and your devices are in deadlock because they're both expecting each other to respond to something, can you tweak the other side of the serial I/O to regularly send some type of "ping" packet with an incrementing counter? (and log the ping packets on the PC side; that way you can see whether you've missed any)
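If you do add such a ping, the PC-side logger can be completely separate from the legacy app; a rough sketch using pyserial (assumed installed), assuming a made-up "PING <counter>" line format from the device:

import serial    # pyserial

port = serial.Serial("COM3", 115200, timeout=5)   # adjust the port name and baud rate
last = None
while True:
    line = port.readline().decode(errors="replace").strip()
    if not line:
        print("no data for 5 seconds")
        continue
    if not line.startswith("PING"):
        continue
    n = int(line.split()[1])
    if last is not None and n != last + 1:
        print("missed %d ping(s) between %d and %d" % (n - last - 1, last, n))
    last = n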
Are you sure you have your flow control set up correctly? DTR, RTS, etc...
-Adam
I have written apps that use USB / Bluetooth serial ports and have never had an issue. With Bluetooth I have seen sustained bit rates of 800,000 bps for long periods of time. Most people don't properly implement the port.
My serial port
Not sure if this is a possibility for you, but if you could re-write the code using C#.NET you'd have access to the SerialPort class there. It might remedy your problem. I know a lot of legacy code based around the Win32 API for hardware I/O ports tended to fail in XP due to timing (had a small bit of experience with MIDI).
In addition, I don't know if you can use the Win32 method of Serial Port access in Vista, so that might shut out future MS OSes from being able to use your code.
