I am currently evaluating the Asterisk conference feature: an enterprise environment with a simple internal VoIP setup using SIP phones, where outside users can join the conference from the PSTN, GSM, etc. I know there are a couple of options, such as MeetMe, Conference, Konference and ConfBridge.
Before delving into each option and trying it out, I would like to know where I can find the capacity of these different conference options:
- How many users can join one conference at the same time?
- How many concurrent conferences can run at the same time on one server?
I know this will also be determined by the CPU and the available bandwidth. But assuming we have enough CPU power and bandwidth, is there some maximum limit enforced by the server code?
I'm not aware of any hard limit on the number of conference participants. As you said, the capacity will be determined by the hardware of your server and your inbound call capacity (either in bandwidth or ISDN channels).
Assuming a modern server with reasonable specs, you should be good for several hundred potential participants without breaking a sweat, provided that your processor is not tied up doing echo cancellation in software or transcoding.
You can set a maximum number of participants on a conference bridge. The practical limit will probably depend on bandwidth. While looking for something else, I did come across another site where the MeetMe limit is stated as 128. I am not 100% sure how accurate that is, but it does shed some light on the subject.
You can certainly set your own maximum number of users on a conference bridge. Within your confbridge.conf file:
[fancybridge]
type=bridge                  ; this section defines a ConfBridge bridge profile
max_members=20               ; cap this conference at 20 participants
mixing_interval=10           ; mix audio every 10 ms
internal_sample_rate=auto    ; let Asterisk choose the mixing sample rate
record_conference=yes        ; record each conference to a file
Reference: https://www.voip-info.org/wiki/view/Asterisk+cmd+ConfBridge+10
https://supportforums.cisco.com/discussion/10878981/meetme-limits
Side note: it may be wise to set max_members on a bridge for security reasons anyway. Just a thought!
I am currently looking at 1 Gb/s download and 35 MB/s upload over coax. We are looking at setting up some VoIP services, etc., which will be impacted by such a low upload speed. How do I determine what the maximum bandwidth usage for the day was? I'm aware that netstat, netsh, and Network Monitor provide information regarding individual processes, but I cannot find the data I need to determine whether upgrading to fiber would be marginally beneficial or entirely necessary. Any help would be greatly appreciated.
Netstat, netsh, performance monitor, network monitor
I can obtain information about any particular connection, but I need something more akin to overall statistics so that I can make an informed decision regarding our network limitations (fiber vs. coax). Do we need an additional 200 Mb/s? etc.
Typical VoIP services only require a few kilobytes per second of upload bandwidth per phone call. Do you anticipate having many (hundreds of) concurrent phone calls that would add up to 35 MBytes/s (or, more likely, 35 Mbits/s)? As an aside, network bandwidth is typically expressed with a big M and a little b (e.g. Mb) to denote megabits per second.
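As a rough sanity check, here is a back-of-the-envelope calculation in Python, assuming G.711 at roughly 87 kbit/s per call once IP/UDP/RTP overhead is included (the codec choice and overhead figure are my assumptions, not something from the question):

# Rough estimate of how many concurrent VoIP calls fit in a given uplink.
# Assumption: G.711 (64 kbit/s payload) is ~87 kbit/s per direction with
# IP/UDP/RTP headers; compressed codecs such as G.729 use far less.
UPLOAD_MBPS = 35        # upload capacity in megabits per second
KBPS_PER_CALL = 87      # assumed per-call bandwidth, one direction

max_calls = (UPLOAD_MBPS * 1000) // KBPS_PER_CALL
print(f"~{max_calls} concurrent G.711 calls on a {UPLOAD_MBPS} Mbit/s uplink")
# prints roughly 400, so VoIP alone is unlikely to saturate that uplink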
I would suggest first using a utility like SolarWinds Real-Time Bandwidth Monitor to look at your router/gateway's utilization.
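If you would rather script it than install a monitoring suite, a small sampler along these lines will record the day's peak utilisation (this uses the third-party psutil package; the five-second interval is an arbitrary choice):

# Sample total NIC throughput periodically and remember the peak.
# Requires: pip install psutil
import time
import psutil

INTERVAL = 5                      # seconds between samples (arbitrary)
peak_mbps = 0.0
last = psutil.net_io_counters()

while True:
    time.sleep(INTERVAL)
    now = psutil.net_io_counters()
    sent = (now.bytes_sent - last.bytes_sent) * 8 / INTERVAL / 1_000_000
    recv = (now.bytes_recv - last.bytes_recv) * 8 / INTERVAL / 1_000_000
    peak_mbps = max(peak_mbps, sent, recv)
    print(f"up {sent:6.1f} Mbit/s  down {recv:6.1f} Mbit/s  peak {peak_mbps:6.1f} Mbit/s")
    last = now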
I am trying to make a simple, general-purpose, multi-threaded async downloader in Python. How many parallel connections can generally be made to a server with minimum risk of being banned or rate limited?
I am aware that the network will be a limiting factor in some cases, but let's assume for the sake of discussion that the network isn't an issue here. I/O is also done asynchronously.
According to Browserscope, browsers make a maximum of 17 connections at a time.
However, according to my research, most download managers download files in multiple parts and make 8+ connections per file.
1. How many files can be downloaded at a time?
2. How many chunks of a single file can be downloaded at one time?
3. What should be the minimum size of those chunks to make the overhead of creating parallel connections worthwhile?
It depends.
While some servers tolerate a high number of connections, others don't. General web servers might be more on the high side (low two digits); file hosters might be more sensitive.
There's little more to say unless you can check the server's configuration, or just try it and remember the result for next time, once your ban has timed out.
You should, however, watch your bandwidth. Once you max out your access line, there's no gain in increasing the number of connections further.
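To make the "don't hammer the server" part concrete, here is a minimal sketch of a politeness-limited async downloader using aiohttp and an asyncio semaphore; the cap of 4 connections and the URLs are placeholders, not recommendations:

# Async downloader that caps concurrent connections to one server.
# Requires: pip install aiohttp
import asyncio
import aiohttp

MAX_CONNECTIONS = 4                      # conservative per-host cap (placeholder)

async def fetch(session, sem, url):
    async with sem:                      # limit simultaneous requests
        async with session.get(url) as resp:
            resp.raise_for_status()
            return url, await resp.read()

async def download_all(urls):
    sem = asyncio.Semaphore(MAX_CONNECTIONS)
    connector = aiohttp.TCPConnector(limit_per_host=MAX_CONNECTIONS)
    async with aiohttp.ClientSession(connector=connector) as session:
        return await asyncio.gather(*(fetch(session, sem, u) for u in urls))

if __name__ == "__main__":
    files = asyncio.run(download_all([
        "https://example.com/a.bin",     # placeholder URLs
        "https://example.com/b.bin",
    ]))
    for url, data in files:
        print(url, len(data), "bytes")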
Excuse me for the basic question, but I couldn't find an answer in my Google searches.
I want to develop a server which should respond to hundreds of clients. Each client may send tens to hundreds of messages per second.
I want to know: if I use queuing protocols such as AMQP (the RabbitMQ implementation) or ZeroMQ, how many TCP connections should the server support?
Is it the total number of clients or total number of messages per second?
Nota bene: ZeroMQ is definitely not a "queuing" protocol. One should rather think of it as a powerful framework of low-level building blocks [primitives] that lets designers set up very fast and rather abstract behaviour-oriented designs for advanced use-cases, from messaging per se to robust, non-blocking, asynchronous signalling and content-related transport + controls for distributed-systems concurrency. Indeed a powerful set of tools, believe me.
AMQP is a broker-based approach.
ZeroMQ is a broker-less solution.
The message count per se does not typically create a problem.
The processing associated with each message typically does.
Limit No.1: operating system TCP-settings constraints
Solution: review your system documentation for the limits you have to work within, and set up/tweak the values at the OS level as appropriate.
Limit No.2: growing processing delays at an endpoint also increase the risk of RECV/SEND buffer overflows.
Solution: review whether your code architecture can increase transaction performance, be it through distributed pipeline processing or through a load balancer that spreads the flow of incoming connections/transactions over multiple worker units (see the sketch below).
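As an illustration of that second solution, a ZeroMQ PUSH/PULL pipeline is one common way to fan incoming work out to several worker processes, so that per-message processing rather than the TCP connection count remains the limiting factor. A minimal pyzmq sketch (the port and message format are arbitrary):

# Minimal ZeroMQ PUSH/PULL pipeline: one producer fans work out to N workers.
# Requires: pip install pyzmq
import sys
import zmq

def producer(n_messages=1000):
    ctx = zmq.Context()
    push = ctx.socket(zmq.PUSH)
    push.bind("tcp://*:5557")            # workers connect here (arbitrary port)
    for i in range(n_messages):
        push.send_string(f"task {i}")    # ZeroMQ load-balances across workers

def worker():
    ctx = zmq.Context()
    pull = ctx.socket(zmq.PULL)
    pull.connect("tcp://localhost:5557")
    while True:
        task = pull.recv_string()
        # ... the actual (expensive) per-message processing goes here ...
        print("processed", task)

if __name__ == "__main__":
    producer() if "producer" in sys.argv[1:] else worker()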
If you're trying to build an application that needs to have the highest possible sustained network bandwidth, for multiple and repetitive file transfers (not for streaming media), will having 2 or more NICs be beneficial?
I think your answer will depend on your server and network architecture, and unfortunately may change as they change.
What you are essentially doing is trying to remove the "current" bottleneck in your overall application or design, which you have presumably identified as your current NIC (if you haven't actually confirmed this, I would stop and check it first, in case something else restricts throughput before you reach the NIC limit).
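If you want to confirm that quickly, a crude raw-TCP throughput test between two hosts (essentially a poor man's iperf; the port and block size below are arbitrary) will show whether you are anywhere near line rate before you spend money on hardware:

# Crude TCP throughput test: run with "receiver" on one host,
# and with the receiver's hostname/IP as the argument on the other.
import socket
import sys
import time

PORT = 5001                      # arbitrary test port
BLOCK = 64 * 1024                # 64 KiB per send/receive

def receiver():
    srv = socket.socket()
    srv.bind(("", PORT))
    srv.listen(1)
    conn, _ = srv.accept()
    total, start = 0, time.time()
    while True:
        data = conn.recv(BLOCK)
        if not data:
            break
        total += len(data)
    secs = time.time() - start
    print(f"received {total * 8 / secs / 1e6:.1f} Mbit/s")

def sender(host, seconds=10):
    sock = socket.create_connection((host, PORT))
    end = time.time() + seconds
    while time.time() < end:
        sock.sendall(b"x" * BLOCK)
    sock.close()

if __name__ == "__main__":
    receiver() if sys.argv[1] == "receiver" else sender(sys.argv[1])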
Some general points on this type of performance optimization:
It is worth checking if you have the option to upgrade the current NIC to a higher bandwidth interface - this may be a simpler solution for you if it avoids having to add load balancing hardware/software/configuration to your application.
As pointed out above, you need to make sure all the other elements in your network can handle this increased traffic, i.e. that you are not simply going to have congestion in your internet connection or in one of your routers.
Similarly, it is worth checking what the next bottleneck will be once you have made this change, if the traffic continues to increase. If adding a new NIC only gives you 5% more throughput before you need a new server anyway, then it may be cheaper to look for a new server right away with better I/O from the start.
The profile of your traffic, and how it is predicted to evolve, may influence your decision. If you have a regular daily peak that only slightly exceeds your capacity, then a simple fix may serve you for a long time. If you have steadily growing traffic, then a more fundamental look at your system architecture will probably be necessary.
In line with the last point above, it may be worth looking at the various cloud offerings to see if any meet your requirements at a reasonable cost, possibly even as a temporary resource each day just to get you through your peak traffic times.
And finally, you should be aware that as soon as you settle on a solution and get it up and running, someone else in your organization will change or upgrade the application to introduce a new and unexpected bottleneck...
It can be beneficial, but it won't necessarily be that way "out of the box".
You need to make sure that both NICs actually get used - by separating your clients on different network segments, by using round robin DNS, by using channel bonding, by using a load balancer, etc. And on top of that you need to make sure your network infrastructure actually has sufficient bandwidth to allow more throughput.
But the general principle is sound - you have less network bandwidth available on your server than disk I/O, so the more network bandwidth you add the better, up until it reaches or exceeds your disk I/O; beyond that point it doesn't help you anymore.
Potentially yes. In practice, it also depends on the network fabric, and whether or not network I/O is a bottleneck for your application(s).
When writing a custom server, what are the best practices or techniques to determine maximum number of users that can connect to the server at any given time?
I would assume that the capabilities of the computer hardware, network capacity, and server protocol would all be important factors.
Also, do you think it is a good practice to limit the number of network connections to a certain maximum number of users? Or should the server not limit the number of network connections and let performance degrade until the response time is extremely high?
Dan Kegel put together a summary of techniques for handling large numbers of network connections from a single server, here: http://www.kegel.com/c10k.html
In general modern servers can handle very large numbers of concurrent connections. I've worked on systems having over 8,000 concurrently open TCP/IP sockets.
You will need a high-quality servicing interface to handle that kind of load; check out libevent or libev.
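For a feel of what such an event-driven servicing loop looks like, here is a minimal sketch using Python's standard selectors module (a bare echo server on an arbitrary port; libevent/libev give you the same pattern in C with better scalability):

# Minimal single-threaded event-driven echo server using selectors.
import selectors
import socket

sel = selectors.DefaultSelector()        # epoll/kqueue where available

def accept(server_sock):
    conn, _ = server_sock.accept()
    conn.setblocking(False)
    sel.register(conn, selectors.EVENT_READ, handle)

def handle(conn):
    data = conn.recv(4096)
    if data:
        conn.sendall(data)               # echo back
    else:                                # client closed the connection
        sel.unregister(conn)
        conn.close()

server = socket.socket()
server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
server.bind(("", 9000))                  # arbitrary port
server.listen(1024)                      # generous accept backlog
server.setblocking(False)
sel.register(server, selectors.EVENT_READ, accept)

while True:
    for key, _ in sel.select():
        key.data(key.fileobj)            # invoke the registered handler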
That is a good question, and it definitely is situational. What is your computer? Do you have a 4-socket machine filled with quad-core Xeons, 128 GB of RAM, and Fibre Channel connectivity (like the pair of Dell R900s we just bought)? Or are you running on a P3 550 with 256 MB of RAM and a 56K modem? How much load does each connection place on your server? What kind of response time is acceptable?
These are the questions you need to answer. I guess the best way to find the answer is through load testing. Create a unit test of the expected (and maybe some unexpected) paths that your code will perform against your server. Find a load testing framework that will allow you to simulate 10, 100, 1000, 10000 users performing those tasks at the same time.
That will tell you how many connections your computer can support.
The great thing about the load/unit test scenario is that you can put response-time expectations into your unit tests and increase the load until you fall outside of your response time. If you have a requirement to support X users with a Y-second response time, you will be able to demonstrate it with your load tests.
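A very rough sketch of that kind of load test in Python (the target host, request payload, and thresholds are placeholders; dedicated tools such as JMeter or Locust give far better reporting):

# Open N concurrent connections, time one request/response each,
# and report how many stayed within the response-time budget.
import socket
import time
from concurrent.futures import ThreadPoolExecutor

HOST, PORT = "localhost", 9000      # placeholder target
USERS = 1000                        # simulated concurrent users
BUDGET = 2.0                        # acceptable response time in seconds

def one_user(_):
    start = time.time()
    try:
        with socket.create_connection((HOST, PORT), timeout=BUDGET) as s:
            s.sendall(b"ping\n")
            s.recv(1024)
        return time.time() - start
    except OSError:
        return None                 # connection refused / timed out

with ThreadPoolExecutor(max_workers=USERS) as pool:
    times = list(pool.map(one_user, range(USERS)))

ok = [t for t in times if t is not None and t <= BUDGET]
print(f"{len(ok)}/{USERS} users served within {BUDGET}s "
      f"(worst successful: {max(ok, default=0):.3f}s)")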
One of the biggest setbacks with highly concurrent connections is actually the routers involved. Home-user-oriented routers usually have a small NAT table, which prevents the router from actually delivering all of the connections to the server.
Be sure to research your router/network infrastructure setup as well.
I think you shouldn't limit the number of connections your server will allow - just catch and properly handle any exceptions that might occur when accepting and closing connections, and you should be fine. You should leave that kind of lower-level programming to the underlying OS layers - that way you can port your server more easily, etc.
This really depends on your operating system.
Different Unix flavors will support an "unlimited" number of file handles/sockets; others have high limits like 32768.
A typical user limit is 8192, but it can usually be set higher.
I think Windows is more limiting, but the server versions may have higher limits.
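On Unix-like systems the per-process socket ceiling is the open-file-descriptor limit, which a process can inspect and, up to the hard limit, raise at runtime. A minimal Python sketch:

# Inspect and raise the per-process file-descriptor limit (Unix only).
# Each TCP socket consumes one descriptor, so this caps concurrent connections.
import resource

soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print(f"soft limit: {soft}, hard limit: {hard}")

# Raise the soft limit up to the hard limit; going beyond the hard limit
# needs root / a change to the system-wide ulimit configuration.
resource.setrlimit(resource.RLIMIT_NOFILE, (hard, hard))
print("soft limit now:", resource.getrlimit(resource.RLIMIT_NOFILE)[0])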