Running Asterisk on Windows Server 2008 R2 as a virtual machine?

I am configuring a site for a service center and I have an HP ProLiant server with dual Xeon CPUs. I want to know if it's a good idea to run the Asterisk platform as a virtual machine on Windows Server 2008 R2.
Up to 15 agents will be active concurrently, and besides that I will probably need call recording, report generation, etc.

You can run Asterisk in VMware or VirtualBox. Running it in Hyper-V has never worked for me; VMware has a better chance of working correctly.
But under high load (more than about 10 calls) you can experience sound quality issues.
So it is not recommended to run it in production with 15 concurrent calls.

Since voice is likely the single most demanding network traffic stream that most of us sysadmins will ever come across, you need to be way, way out ahead of this one. Unless you are prepared to dedicate years of your life, non-stop, to debugging, programming, and mastering the tiniest nuances of timing sources inside guests, across various hypervisors and hypervisor configurations, at 1000 clock ticks per second, as that relates to which codec you are using, whether you will have 1, 10, or 100 calls going, whether you will be recording those calls, whether anyone else on the system will be on a conference call at the same time, and the exact version of the daemon and the driver, then I would humbly and professionally recommend going another route now and saving your head and your hair for something actually worth your time.
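If you do decide to experiment anyway, one quick sanity check is to measure how accurately the guest can honour a steady timer, since Asterisk's audio framing depends on a reliable timing source. Below is a minimal sketch in plain Python (not Asterisk-specific; the 20 ms interval is simply the packetization interval of common VoIP codecs) that you could run inside the VM. Large or spiky lateness numbers are a warning sign for call quality.

import time
import statistics

# Measure how accurately this machine honours a 20 ms timer tick,
# roughly the packetization interval of common VoIP codecs.
INTERVAL = 0.020   # 20 ms
SAMPLES = 500

errors_ms = []
next_tick = time.perf_counter()
for _ in range(SAMPLES):
    next_tick += INTERVAL
    time.sleep(max(0.0, next_tick - time.perf_counter()))
    # How late did we actually wake up relative to the ideal tick?
    errors_ms.append((time.perf_counter() - next_tick) * 1000.0)

print(f"mean lateness: {statistics.mean(errors_ms):.3f} ms")
print(f"max lateness:  {max(errors_ms):.3f} ms")
print(f"stdev:         {statistics.pstdev(errors_ms):.3f} ms")

Run it a few times both on bare metal and inside the guest under load; the comparison matters more than the absolute numbers.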

Related

How to prevent a Perforce UNIX server from generating thousands of IDLE processes

I'm asking this question because we have run out of ideas on how to handle the current situation with our Perforce versioning server.
The Server
The server is hosted at Scaleway on a bare-metal machine with two SSDs under the hood (so we know it is not a hardware issue).
We are currently using the free license of Perforce to evaluate it.
p4 info yields the following:
The Problem
We are using Perforce on a UNIX server to version our Unreal Engine 4 project. Lately we discovered that the server has stockpiled 2771 processes, around 80% of which are p4d processes. We suspect these IDLE connections/processes are swamping the server and are the root of the connectivity issues we encounter at the office.
We enabled monitoring to keep an eye on RUNNING and IDLE processes
p4 configure set monitoring=2
When we now display the monitored processes, we see IDLE ones that have been running for more than an hour:
p4 monitor show
We already tried disabling keepalive connections with
p4 configure set net.keepalive.disable=1
And we still see the following, which has been going on for a while:
The Question
Now the question I want to ask is:
Has anybody else ever encountered this behaviour with a Perforce server on UNIX?
Does anybody know how we can tell the server to discard IDLE connections?
EDIT
After some tracking we discovered that the proxy our office network sits behind is causing the problems and for some reason doesn't allow the connections to close. Does anyone have a clue how to get around this issue?
Based on the monitor output it appears that these clients are opening a bunch of connections and holding them open, basically DOSing the server. You could go through and kill the pids on the server side, but this sounds like a bug in the client that should be raised with Perforce technical support.
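If you need a stopgap while waiting on support, one option is to script the "kill the pids" step against the monitor table. The sketch below is an assumption-heavy illustration: it assumes "p4 monitor show" prints one connection per line as "pid status user elapsed command", that idle connections are flagged as IDLE (status I or command IDLE), and that your user is allowed to run p4 monitor terminate (normally super access).

import subprocess

# Terminate Perforce connections that have been idle longer than a threshold.
# Assumes "p4 monitor show" lines look like: "<pid> <status> <user> <hh:mm:ss> <command>"
MAX_IDLE_SECONDS = 3600

def to_seconds(hhmmss):
    h, m, s = (int(part) for part in hhmmss.split(":"))
    return h * 3600 + m * 60 + s

output = subprocess.run(["p4", "monitor", "show"],
                        capture_output=True, text=True, check=True).stdout

for line in output.splitlines():
    fields = line.split()
    if len(fields) < 5:
        continue
    pid, status, user, elapsed, command = fields[:5]
    if (status == "I" or command == "IDLE") and to_seconds(elapsed) > MAX_IDLE_SECONDS:
        print(f"terminating idle connection {pid} ({user}, idle for {elapsed})")
        # Marks the process for termination; requires appropriate permissions.
        subprocess.run(["p4", "monitor", "terminate", pid], check=False)

Treat it as a band-aid: it does not fix whatever is keeping the proxy from closing the connections in the first place.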

Application server hosting

I'm writing a Qt/C++ application and I plan to add a networking part with a socket connection to a server, also implemented in Qt.
If I host the server locally there is no real problem.
But if I want to share my application (the client part) with some people and be sure my server is always running, the best way would be to have a remote server.
Could you give me some clues on how to do it? The steps to follow in this case are still not clear to me.
Is there a better way to do this?
Can I find free hosting?
Thanks a lot! :-)
There are generally 3 options:
1. Local hosting
This is a server running at your physical location. You can set it up exactly as you want and the server will do whatever you want, but it must be turned on the whole time; when there is no other work it will just consume power. You must also provide all the hardware (server components), the software to run it (operating system), the network devices and connection (a router that needs special setup [NAT, port forwarding, ...], plus sufficient speed and reachability of the internet connection) and most likely some security devices/software (firewalls and so on).
This is the best option for basic development and testing, but once the service has to serve a public audience, it is not really worth running the server yourself.
2. Remote hosting (virtualized or dedicated server)
This has been the dominant option for the last 20-30 years, with web and application developers putting their software on a prepared server. A dedicated server is a physical server running at a provider's location; the provider lends you the hardware (and maybe a license for the OS/other software). A virtualized machine is one piece of hardware (a server) hosting multiple virtual servers (several clients sharing the same hardware).
The general benefit is that the networking/security/hardware issues are handled by the hosting provider; you are just renting some disk space and compute time/performance. Normally the company will provide a whole server on which you can set up several services, run multiple protocols, etc.
An ideal solution for websites and a single instance or a few instances of server application(s).
3. Cloud hosting
This is the newest technology at the moment (around for 10-15 years [e.g. AWS running since 2006, Azure since 2010]). The datacenter owners (from option 2) improved their offering and built applications on top of the servers that do all the work for you (mostly automatically). In a few clicks the servers are running and you can deploy applications, database engines, web pages, IoT hubs, ... quite a lot of stuff. The benefit is clearly that you spend a minimum of time setting things up and they just run, with high uptime (e.g. 99.9995%).
The difference between dedicated and cloud: on a dedicated server you can install almost any OS that fits your needs, run just the services you want, and have full control. In a cloud solution you don't have as much "physical" control and the data more or less lives somewhere in datacenters all over the world, but it is generally the more scalable solution, and once your app is used by lots of users from the general public, it is the best way to go.
Common approach:
The most common approach is that while you develop, you run a local server on which you deploy, test, and improve. Once it is stable, you order a server, either in the cloud or as a dedicated/virtual machine, and deploy there. Some developers know from the very beginning that their app will run on cloud services, so they order them and develop against them from the start, but in most cases there is no need for that.

asp.net runs fast then slow on restart

Why is it that every time the server goes down and ASP.NET restarts, the response time is SUPER FAST for a few minutes when it comes back up? I assume it is because everyone is off the server and I am one of the few (or the only) people back on it so quickly?
I have discussed this with our developers, and they say the response time is due to the load everyone (200+ desktops) normally puts on the server, and that when you are the only person on it, it flies. Really? Does that mean we need newer, faster web servers?
I am not a programmer, but I think there may be two answers: one, what the devs say above is true; two, the system is accumulating temp files of some sort that get cleared out when the server crashes and restarts.
How do we prove who might be right? Where does one start to look for ASP.NET bottlenecks?
Windows Server 2003
ASP.NET 3.0
IIS 6
12 GB RAM
SQL Server 2005 (the DB admin says there is no load issue on SQL)
Some very basic steps you can follow to check whether your server is running at its limits:
First, download Process Explorer from Sysinternals and run it to check two things.
Is your server at its memory limit?
If yes, which program is eating the memory? SQL Server 2005 usually uses a lot of memory for its database cache, and this builds up after it has been running for a long time.
Is the server using all of its computing power? If yes, check which program needs all that computing power.
Next, download TCPView from Sysinternals, run it, and see how many connections are open, how fast they move, etc. There you can spot anomalies, or see whether the machine is at its limit there as well.
The final step is to defragment your disks.
Also keep in mind that ASP.NET session state locks requests: requests that share the same session are serialized, and long-running requests tie up worker threads. So if you have one web application with many users, and some users make calls that take a long time to process, this can cause delays for all users.
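If you want numbers to share with the developers rather than screenshots, the same checks can be scripted. The sketch below is only an illustration and assumes the third-party psutil package (pip install psutil) is available on the web server; Process Explorer and TCPView remain the interactive equivalents.

# Rough, scriptable equivalent of the Process Explorer / TCPView checks above.
# Assumes the third-party psutil package; run elevated to see all connections.
import psutil

mem = psutil.virtual_memory()
print(f"memory used: {mem.percent}% of {mem.total // (1024 ** 2)} MB")
print(f"cpu usage:   {psutil.cpu_percent(interval=1.0)}%")

# Top memory consumers -- on a box like this it is often sqlservr.exe.
procs = []
for p in psutil.process_iter(["name", "memory_info"]):
    if p.info["memory_info"] is not None:
        procs.append((p.info["memory_info"].rss, p.info["name"]))
for rss, name in sorted(procs, key=lambda t: t[0], reverse=True)[:5]:
    print(f"{name:<20} {rss // (1024 ** 2)} MB")

# Established TCP connections (what TCPView shows live).
established = [c for c in psutil.net_connections(kind="tcp")
               if c.status == psutil.CONN_ESTABLISHED]
print(f"established TCP connections: {len(established)}")

Capturing these figures during the fast period right after a restart, and again once the site has slowed down, should tell you whether memory, CPU, or connection counts are actually what changes.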

is it possible to limit the network traffic from my PC to my PC?

Hi guys, I'm debugging a client/server program, and to see how the application performs over a slow internet connection I have tried many different approaches. However, the best would be for the server and the client to be on the same PC; my debugging environments for both the server side and the client are set up on one PC.
So I'm wondering, is there any way to limit the speed? I'm using TCP but I don't have much in-depth knowledge of it.
Thank you
There are two important factors regarding a "slow" internet connection that you need to test out since they have different implications for your application: bandwidth and latency.
If you provide some more details about what OS you are running your tests on, it would be easier to recommend a way to limit the network performance.
On a related side note, it's generally a bad idea to performance-test any kind of networking using the loopback device on your machine, since many aspects of it will perform very differently from the regular network device on your machine.
You mention in the comments that this needs to be done on Windows, while the network emulators I know of (e.g. netem, TCN, other variants) all require Linux. So one thing you could do is create a virtual machine (VirtualBox is fine, I have done similar things with it), install Linux on it, configure two network interfaces, emulate the slow/long/lossy/jittery network between them, and route the test traffic through it from Windows.
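If a full Linux VM with netem is more than you need, another option is a small throttling proxy running on the same PC, between your client and your server. The sketch below is plain Python with only the standard library; the port numbers (9000 for the real server, 9001 for the proxy) are made up for illustration. Point the client at the proxy port and it forwards TCP traffic while adding a fixed delay and a crude bandwidth cap in each direction.

import socket
import threading
import time

# Illustrative addresses: the real server listens on 9000, the client
# connects to the proxy on 9001 instead.
SERVER_ADDR = ("127.0.0.1", 9000)
PROXY_ADDR = ("127.0.0.1", 9001)
LATENCY_S = 0.2                # add ~200 ms before forwarding each chunk
BYTES_PER_SECOND = 64 * 1024   # cap throughput at roughly 64 KiB/s

def pump(src, dst):
    # Forward data one way, with added delay and a crude bandwidth cap.
    try:
        while True:
            chunk = src.recv(4096)
            if not chunk:
                break
            time.sleep(LATENCY_S)                       # simulated latency
            time.sleep(len(chunk) / BYTES_PER_SECOND)   # simulated narrow pipe
            dst.sendall(chunk)
    finally:
        src.close()
        dst.close()

def handle(client):
    upstream = socket.create_connection(SERVER_ADDR)
    threading.Thread(target=pump, args=(client, upstream), daemon=True).start()
    threading.Thread(target=pump, args=(upstream, client), daemon=True).start()

listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
listener.bind(PROXY_ADDR)
listener.listen()
print(f"throttling proxy on {PROXY_ADDR}, forwarding to {SERVER_ADDR}")
while True:
    conn, _ = listener.accept()
    handle(conn)

It is a blunt instrument compared with a real network emulator (it delays per chunk rather than shaping a true pipe), but it is often enough to see how the application behaves when the link is slow.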
Finally, I found this, which does what I need:
http://www.nirsoft.net/utils/socket_sniffer.html
It captures Windows socket traffic, no matter whether it's local or not.

ASP.NET Hosting on Virtual Servers running on VMWare

My company is running several international websites for selling insurance products.
Our current setup is a web farm with multiple load-balanced web servers hosting our ASP.NET applications. The backend is a single, yet powerful, SQL Server (all in one data center).
Our network admins want to move to virtual servers running on VMware.
Scenarios could be
1. Webfarm: multiple standard web servers, load-balanced (current setup), session state on SQL Server
2. Virtual webfarm: multiple virtual servers, load-balanced on one physical VMware host, session state on SQL Server
2a. Same as above but with multiple physical hosts
3. Single virtual web server: one big, powerful virtual web server, no load balancing required, session state can be kept in-process
There is big hype around virtualization, and I can see the benefits, but I have no experience with it. I cannot tell what issues we will face or what we should pay special attention to.
Does anyone have experience with such a virtual setup?
What are general recommendations?
I tend towards 2a. I am afraid of having all webservers on one single physical machine.
Many thanks in advance for sharing your thoughts.
There are three reasons to use more than one webserver for an application:
Scaling - More grunt is required than one machine can provide
Reliability - Website should keep running in case of failure (a. hardware b. software)
Prioritization - One of the webservers takes on heavy work (perhaps scheduled tasks) leaving the other to respond to client requests quickly.
Marrying that up to your scenarios:
Scenario 1 provides 1, 2, 3
Scenario 2 provides 2b (perhaps 2a if it is fully hardware redundant (doubt it))
Scenario 2a provides 1, 2
Scenario 3 provides none of the above
Advantages of Virtual Hosting:
Lower Total Cost of Ownership (TCO): a big cluster serving multiple purposes is cost-effective
New servers can be created quickly if needed
Redundant hardware is easier to justify if the cost is shared among many applications
Disadvantages:
Other virtual machines may suck away your CPU/Disk IO capacity
IMHO there is little point to load balancing multiple virtual machines on the same virtual server.
Robert's pretty much covered it all, I'm mostly just adding a note to say that at least one of our clients is currently running with option 2a.
So we have multiple loadbalanced web servers running on a couple of VM hosts, talking to a non-virtualised SQL cluster - this works quite well for them.
One other advantage of virtualisation is that it allows you to more fully utilise your hardware - however, you need to be aware that if you're running your virtual host at around 90% capacity with multiple VMs, you've not got a lot of spare capacity for any traffic spikes - if you're not expecting any, then great, but if you are, you'll need to have something in place to cope.
I agree with all of the above answers, and I actually work at a webhost. :-) If you're using multiple load-balanced webservers now then I can only assume the reason for it is either
Hardware Redundancy: If a single app server fails then those sessions are lost, but the app keeps running on the other servers and users can immediately re-connect.
or
Application Load Distribution (it's late so I can't think of a better name): Your traffic dictates that you have multiple app servers since all of your users would crash a single app server.
If #1 is the reason, then going to VMware defeats the purpose, since you only have one server supporting everything, and in case of a hard drive crash, etc., you are down while it is repaired. If #2 is the reason then a VMware-based solution MAY work; however, keep in mind that the hardware you'd use would almost necessarily be of a higher caliber than what you're currently using. So you may get more bang for your buck, but you STILL lose the redundancy that multiple physical machines gave you.
Now, you could always combine the two by having multiple physical machines all running VMWare, but that adds a level of complexity to things that you may not necessarily want either.
It doesn't sound like there would be any tangible benefit from running multiple virtual servers on the same physical host; you're just adding overhead. Unless I'm missing something with the way you've described the setup, there wouldn't be any benefit at all from moving to VMware - unless you're looking at taking advantage of features such as VMotion.
VMware is most useful for consolidating underutilized hardware. If your hardware is running at near-capacity during peak periods then you don't want to run multiple VMs on the one machine.
There are benefits to virtualization, but your network admins need to prove that there is a benefit for your company before you even consider switching. I would say if you have multiple apps running on dedicated servers with low traffic (i.e. each app has its own physical server) then sure, virtualize. If you have one app spread over many servers, then don't.
You should be able to use virtual machine hosts with multiple VMs per host and load-balance across all of them.
Microsoft is doing this with MSDN and TechNet: http://virtualization.info/en/news/2008/05/microsoft-migrates-msdn-and-technet-on.html
