Measuring speed between Unix server and PC

My company just purchased Unix servers identical to the ones we are currently using. The new servers will be hosted about 1,000 miles away in Langhorne, PA, while the current servers are hosted locally. Once the hosted servers are finished being set up, the local servers will go away. My boss asked me to find a tool that could help us determine how much longer it will take to move files from the server to our local PCs with the remote servers versus the local ones.
Does anyone know of a tool that could accomplish this? Obviously I could just move a large file and use wall clock time, but depending on what is running on the server, I doubt this would be very accurate.
Thanks in advance!
Aaron

Most FTP programs I've used show transfer times; FileZilla, for example:
Status: File transfer successful, transferred 118,782,740 bytes in 11 seconds
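If you want to separate raw network throughput from whatever load the server happens to be under, a dedicated measurement tool is more reliable than timing a file copy. A minimal sketch using iperf3, assuming it can be installed on both ends (the hostname is a placeholder):
# On the Unix server, start iperf3 in listen mode:
iperf3 -s
# On your local PC, run the client against that server:
iperf3 -c newserver.example.com
# iperf3 reports the achievable bandwidth over the link itself, so you
# can compare the local and remote servers independently of disk or CPU load.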

Related

How to disallow Perforce UNIX server to generate thousands of IDLE processes

I'm asking this question because we have run out of ideas on how to handle the current situation with our Perforce versioning server.
The Server
The server is hosted on Scaleway on a bare-metal machine with two SSDs under the hood (so we know it is not a hardware issue).
We are currently using the free license of Perforce to evaluate it.
p4 info yields the following:
The Problem
We are using Perforce on a UNIX server to version our Unreal Engine 4 project. Lately we discovered that the server had stockpiled 2,771 processes, around 80% of which are p4d processes. We suspect these IDLE connections/processes are swamping the server and are the root of the connectivity issues we encounter at the office.
We enabled monitoring to keep an eye on RUNNING and IDLE processes:
p4 configure set monitoring=2
When we now display the monitored processes, we see IDLE ones that have been running for more than an hour:
p4 monitor show
We already tried disabling keepalive connections with
p4 configure set net.keepalive.disable=1
And we still see the following, which has been going on for a while:
The Question
Now the questions I want to ask are:
Has anybody else encountered this behaviour with a Perforce server on UNIX?
Does anybody know how we can tell the server to discard IDLE connections?
EDIT
So after some tracking we discovered that the proxy our office network sits behind is causing the problems and, for some reason, does not allow the connections to close. Does anyone have any clues on how to get around this issue?
Based on the monitor output it appears that these clients are opening a bunch of connections and holding them open, basically DoSing the server. You could go through and kill the PIDs on the server side, but this sounds like a bug in the client that should be raised with Perforce technical support.
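If you do want to clear them out server-side while you wait on support, a rough sketch (assuming super access, and assuming the default p4 monitor show column layout where the command is the fifth field):
# Find monitored processes whose command column reads IDLE and mark
# each one for termination. Adjust the awk field if your output differs.
p4 monitor show | awk '$5 == "IDLE" { print $1 }' | while read -r id; do
    p4 monitor terminate "$id"
done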

Application server hosting

I'm writing a Qt/C++ application and I plan to add a networking part, with a socket connection to a server also implemented in Qt.
If I host the server locally there is no real problem.
But if I want to share my application (the client part) with some people and be sure my server is always running, the best way would be to have a remote server.
Could you give me some clues on how to do it? The steps to follow in this case are still not clear to me.
Is there a better way to do this?
Can I find free hosting?
Thanks a lot! :-)
There are generally 3 options:
1. Local hosting
This is a server running at your physical location. You can set it up exactly as you want and the server will do whatever you want, but it must stay turned on the whole time; when there is no other work it will just consume power. You must also buy all the hardware (server components), the software to run it (operating system), the network equipment and connection (a router, which needs special set-up [NAT, port forwarding, ...], plus an internet connection with sufficient speed and reachability), and most likely some security device/software (firewalls and so on).
This is the best option for basic development and testing, but once the service should serve a public audience, it is not really worth running the server yourself.
2. Remote hosting (virtualized or dedicated server)
This option has been the standard for the last 20-30 years: Web and app developers put their software on a server prepared by a provider. A dedicated server is a physical server running at the provider's location; they lend you the hardware (and maybe licenses for the OS/other software). A virtualized machine is one physical server hosting multiple virtual servers (several clients running on the same hardware).
The general benefit is that the networking/security/hardware issues are handled by the hosting owner; you are just renting disk space and compute time/performance. Normally the company will provide a whole server on which you can set up several services, run multiple protocols, etc.
An ideal solution for websites and a single instance (or a few instances) of a server application.
3. Cloud hosting
This is the newest technology of the bunch (around for 10-15 years [e.g. AWS running since 2006, Azure since 2010]). The datacenter owners (from point 2) went a step further and built applications on top of the servers that do all the work for you (mostly automatically). In a few clicks the servers are running and you can deploy applications, database engines, web pages, IoT hubs, ... quite a lot of stuff. The clear benefit is that you spend only a minimum of time setting things up and they will run, with high uptime (e.g. 99.9995%).
Difference between dedicated & cloud: on a dedicated server you can install almost any OS that fits your needs, run just the services you want, and have full control. With a cloud solution you don't have as much "physical" control, and the data more or less lives somewhere in datacenters all over the world. But it is generally a more scalable solution, and once your app is used by lots of users from the public, this is the best way to go.
The common approach:
The most common workflow is that while you develop, you run a local server on which you deploy, test, and improve. Once things are stable, you order a server, either in the cloud or as a dedicated/virtual machine, and deploy there. Some developers know from the very beginning that their app will run on cloud services, so they order one and develop against it, but in most cases there is no need for that.
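As a concrete (and purely illustrative) sketch of that last step: deploying a self-hosted server binary to a rented dedicated/virtual machine can be as simple as copying it over SSH and starting it. The hostname, paths, and port below are placeholders:
# Copy the built server binary to the remote machine:
scp build/myserver user@vps.example.com:/opt/myserver/
# Start it in the background, detached from the SSH session:
ssh user@vps.example.com 'nohup /opt/myserver/myserver --port 4242 > /var/log/myserver.log 2>&1 &'
# For anything long-lived you would normally wrap this in a service
# manager (e.g. a systemd unit) instead of nohup.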

Running Asterisk on Windows Server 2008 R2 as a virtual machine?

I am configuring a site for a service center and I have an HP ProLiant server with dual Xeon CPUs. I want to know if it's a good idea to run the Asterisk platform as a virtual machine on Windows Server 2008 R2.
Up to 15 agents will be active concurrently, and besides that I will probably need to enable call recording, report generation, etc.
You can run Asterisk in VMware or VirtualBox. Running it in Hyper-V has never worked for me; VMware has a better chance of working OK.
But under high load (more than 10 calls) you can experience sound quality issues, so running this in production with 15 concurrent calls is not recommended.
Since voice is likely the single most demanding network traffic stream that most of us sysadmins will ever come across, you need to be way, way out ahead of this one. Unless you are prepared to dedicate many years of your life, non-stop, to debugging and mastering the tiniest nuances of timing sources inside guests, across various hypervisors and hypervisor configurations, at 1000 clock ticks per second, as that relates to which codec you are using, whether you have 1 or 10 or 100 calls going, whether you will be recording those calls, whether anyone else on the system is on a conference call at the same time, and the exact version of the daemon and the driver, then I would humbly and professionally recommend going another route now and saving your head and your hair for something actually worth your time.

NBD client and server on same machine

Is there any way to run an NBD (Network Block Device) client and server on the same machine without deadlocking the system?
I am exhausted from trying to find an answer to this. I would appreciate it if anyone could help.
UPDATE:
I'm writing an NBD server that talks to the Google Storage system. I want to mount a filesystem on the NBD device and back up my files to it. I will be hugely disappointed if I end up having to run the server on another machine. The few ideas I already had seem to lead nowhere:
telling the filesystem to open the block device with the O_DIRECT flag, to bypass the Linux buffer cache
using a raw device (unfortunately, raw devices are character devices, and filesystems refuse to use them as the underlying device)
Just for the record, having the NBD client and server on the same machine has been possible since 2008.
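For reference, a minimal local setup looks roughly like this (assuming the standard nbd-server/nbd-client tools and the old-style server invocation; the backing file, port, and device node are illustrative):
# Create a backing file and export it over NBD on localhost:
dd if=/dev/zero of=/tmp/export.img bs=1M count=1024
nbd-server 10809 /tmp/export.img
# Attach it on the same machine, then put a filesystem on it:
modprobe nbd
nbd-client localhost 10809 /dev/nbd0
mkfs.ext4 /dev/nbd0
mount /dev/nbd0 /mnt
# The classic deadlock risk: under memory pressure, writeback to
# /dev/nbd0 needs the userspace server to make progress, while the
# server itself needs memory to service the write.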
Use a virtual machine (not a container) - you need two kernels, but you don't need two physical machines.
Since the front page of the SourceForge project for NBD says that a deadlock will happen "within seconds" in this scenario, I'm guessing the answer is a big "no".
Try to write a more complete question about the actual goal you're trying to accomplish. Sometimes you need to bang away at a little problem, and sometimes you need to look at the big picture.

ASP.NET Deployment to Multiple Web Servers

I'm putting together my deployment plan for a major deployment next week (basically taking over a site).
I've never had to deploy to multiple web servers before.
Do I need to copy the files to each web server, or is there a tool which will do this for me?
I have to supply the IP address to some 3rd-party vendors; which IP do I give them, since there are four separate servers?
Please check this thread; I hope it helps: What method do you use to deploy ASP.Net applications to the wild?
I would have expected there to be a load balancer spreading the traffic between the servers, in which case you would give out the IP address of the external interface of the load balancer.
For updates in this scenario I would typically take one server out of the load balancer's loop, update that server, and test that it works OK; since you have 4 servers, I'd then take another out and update/test that server. Then switch the load balancer so that the 2 updated servers are live and the other 2 are offline, update/test those servers, and put them back into the loop so they're all live and your update is complete with no downtime. Of course, I'd typically do this during a period of low traffic where possible.
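A rough sketch of that rotation as a script, where lbctl stands in for whatever CLI your load balancer actually provides (it, the server names, and the health-check URL are all hypothetical):
# Update servers two at a time behind the load balancer.
for server in web01 web02; do
    lbctl drain "$server"                        # hypothetical: take it out of the loop
    # ... copy the new build to $server here ...
    curl -fsS "http://$server/health" || exit 1  # smoke-test before re-adding
    lbctl enable "$server"                       # hypothetical: put it back in the loop
done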
Whether you do this using some sort of automatic script or manually would depend on what systems you have in place and how often you would expect to make updates.
It's worth saying that Microsoft have since released a couple of tools to help with this:
http://www.iis.net/download/webdeploy
http://www.iis.net/download/WebFarmFramework
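With Web Deploy, pushing the same content to each box is one msdeploy sync per server; the paths and computer name below are placeholders, and the command is repeated for each of the four servers:
# Mirror the build output onto one target server's site directory:
msdeploy -verb:sync -source:contentPath="C:\Builds\MySite" -dest:contentPath="C:\inetpub\wwwroot\MySite",computerName=WEB01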
