Autosys Machine Container

I am looking to set up a machine container in Autosys to look like the below example:
Example_Example_MAIN
Example_Example_MAIN.Machine_Name1
Example_Example_MAIN.Machine_Name2
Example_Example_MAIN.Machine_Name3
Example_Example_MAIN.Machine_Name4
The way I am currently controlling these machines is to send 2, 3 and 4 Offline and leave 1 Online. Then if 1 goes Offline, I will send 2 Online and the batch will run on that machine.
Is it possible to leave all machines inside a container Online but specify a machine priority? For example, if I leave all machines Online, the batch will automatically target Machine_Name1, but if 1 goes Offline, the batch will automatically target Machine_Name2, and so on.
Sorry if this is a silly question; I'm still only a beginner!
Thank you in advance!
Cameron.

Yes, you can place all of your machines in a single pool. Autosys will only send jobs to the machines in the pool that are Online.
To do further load balancing than that, you'll have to configure the Factor (how fast each machine is relative to the others) and Max_Load (how much work it can handle at once) for every machine in the pool, as well as setting Job_Load units on each of your jobs to indicate how much capacity they consume while running.
Refer to Chapter 3 of the Autosys user guide for the full details.
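As a rough sketch, the setup might look like the JIL below. The machine names come from your example, but the load values are made up for illustration, and JIL attribute syntax varies between Autosys releases, so check the guide for your version before using this.

```
/* Real machines: factor weights relative speed,
   max_load caps the total job_load the machine will accept */
insert_machine: Machine_Name1
type: r
factor: 1.0
max_load: 100

insert_machine: Machine_Name2
type: r
factor: 0.8
max_load: 80

/* Virtual machine (the container) grouping the real machines */
insert_machine: Example_Example_MAIN
type: v
machine: Machine_Name1
machine: Machine_Name2

/* The job carries a load value; Autosys dispatches it to an
   Online machine in the pool with enough remaining capacity */
insert_job: example_batch_job
machine: Example_Example_MAIN
job_load: 25
command: /opt/batch/run_example.sh
```

With factor and max_load set, the scheduler prefers faster, less-loaded machines instead of needing you to toggle machines Offline by hand.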

Related

IIS bottleneck?

I have 3 machines - one running IIS, one with a database, and one from which I test the efficiency of my application - which means:
Using The Grinder I run 1000 instances of my application (hosted on IIS and working against the database on the machine with SQL Server). Using perfmon I observe that there really are 1000 requests.
But the problem is that connecting to this application (IIS) from another computer is very slow. I suspect there is some bottleneck on the IIS side, but I cannot find it - CPU usage is less than 10%.
I think I changed every option in the IIS Manager and machine.config and web.config files - nothing seems to have any effect.
First, confirm whether you actually have a slowness issue while browsing the site.
Check the IIS logs and look at the time-taken field. If the time taken is more than 10 seconds, the request is generally considered slow.
The slowness might have several causes: it might be the network, or something in your code.
I suggest capturing a network trace with Netmon or Wireshark in case it is network-related.
If it's not the network, you can collect a process dump using the DebugDiag 2 Update 2 tool.
Check the link below for how to collect the dumps and analyze them to find the source of the slowness:
https://msdn.microsoft.com/en-us/library/ff420662.aspx
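As an illustration of the log check above, here is a minimal sketch that scans W3C-format IIS log lines for slow requests. The sample lines are made up, and it assumes the time-taken field is enabled in your IIS logging (it is off by default) and recorded in milliseconds, as IIS does:

```python
# Scan W3C-format IIS log lines for requests slower than a threshold,
# using the #Fields: directive to locate the time-taken column.

def find_slow_requests(log_lines, threshold_ms=10000):
    """Return (url, time_taken_ms) pairs for requests slower than threshold."""
    fields = []
    slow = []
    for line in log_lines:
        line = line.strip()
        if line.startswith("#Fields:"):
            fields = line.split()[1:]          # column names for the data rows
            continue
        if not line or line.startswith("#"):   # skip other directives/blanks
            continue
        row = dict(zip(fields, line.split()))
        taken = int(row.get("time-taken", 0))
        if taken > threshold_ms:
            slow.append((row.get("cs-uri-stem", "?"), taken))
    return slow

sample = [
    "#Fields: date time cs-method cs-uri-stem sc-status time-taken",
    "2015-01-01 10:00:00 GET /fast.aspx 200 120",
    "2015-01-01 10:00:05 GET /slow.aspx 200 15234",
]
print(find_slow_requests(sample))   # -> [('/slow.aspx', 15234)]
```

If this turns up requests well past the 10-second mark, it is worth moving on to the network trace or process dump steps.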

Best Practice IIS Web Sites

I am looking for general information regarding the below scenario. I am just looking for opinions. I myself think the first option is better and offers less headaches.
You have an IIS website (www.site.com) that is hit millions of times per day. You have 5 web servers serving the traffic. After a while, the worker processes begin to reach their limit. There are 3 worker processes and one app pool per server.
Option 1: Turn these 5 physical servers into virtual hosts and run 4 VMs from each. That increases the pool of servers to 20. Drop worker processes to 2 and have 1 app pool per VM.
Option 2: Add 5 IPs to each physical server and run 5 instances of the same site on it. For example, Server 1 will have 5 IPs, 5 IIS app pools, and 5 IIS websites named something like Site1, Site2, Site3, Site4, Site5. Yet all of these serve www.site.com.
I personally think Option 2 is ridiculous.
Please let me know what you think.
Good that you ask. Both options head in the wrong direction.
Option 1: Turn a physical server into a host and set up 4 virtual machines on it. Each VM will get roughly a quarter of the memory, processor cores and processor time, and the host itself also consumes some resources. This means you end up with less total capacity after the change.
Option 2: You're right, it is ridiculous. It will not improve anything, just add useless complexity.
If your management has such absurd ideas as you describe, you should really hire a consultant. At least then you would get some realistic scenarios to choose from.
Is there something that requires the web server to run on premises? I'd recommend moving to a managed server with a hosting company. They would take care of system administration, which would take a burden off your system administrator (who doesn't seem to be very competent).

How come my app dies on AppHarbor?

I have a web app that will run forever (at least for a few days) on my local machine using the technique (hack?) described in Jeff Atwood's post: https://blog.stackoverflow.com/2008/07/easy-background-tasks-in-aspnet/
However, when I run it on AppHarbor, my app doesn't run for more than an hour or so (I'm not sure exactly when it dies). As long as I keep hitting the site it stays up, so I'm assuming it is being killed after an idle period, but I'm not sure why.
My app doesn't save any state or persist anything. It makes web service calls and survives errors in any calls.
I turned on a ping service to keep my app alive, but I'm curious why this works on my local machine and not on AppHarbor.
The guys behind AppHarbor pay for EC2 instances for all running apps, so they naturally want to limit CPU usage as much as possible. One way to achieve this is to shut down unused applications very quickly and only restart them when someone actually tries to access them. Paid hosting should not be limited in this way.
(As far as I have been informed, they are able to host around 100k sites on fewer than twenty medium instances, which is certainly impressive and calls for very economical use of resources.)
To overcome the limitation you would need a cron job to ping your AppHarbor site. But this is of course a somewhat recursive problem, since you need AppHarbor to act as the cron job ;)
AppHarbor recycles the Application Pool frequently to keep sleeping websites from using idle CPU time. This is simply the price you pay of using a shared website hosting plan.
If you really want to run a background job then you should be using AppHarbor's background workers, since this is exactly the type of task they were built to run.
http://support.appharbor.com/kb/getting-started/background-workers
Simply build a new console application that runs your logic and include it in your solution. When you push the code, the workers will be started automatically. If you already have other executables in your solution, make sure to edit the app.config and set the 'deploy background worker' value to false.

Running Asterisk on Windows Server 2008 R2 as a virtual machine?

I am configuring a site for a service center and I have an HP ProLiant server with dual Xeon CPUs. I want to know if it's a good idea to run the Asterisk platform as a virtual machine on Windows Server 2008 R2.
Up to 15 agents will be active concurrently, and besides that I will probably need to activate call recording, report generation, etc.
You can run Asterisk in VMware or VirtualBox. Running it in Hyper-V never worked for me; VMware has a better chance of working correctly.
But under high load (more than 10 calls) you can experience sound quality issues.
So it is not recommended to run it in production with 15 concurrent calls.
Since voice is likely the single most demanding network traffic stream that most of us sysadmins will ever come across, you need to be way, way, way out ahead of this one. Unless you are prepared to dedicate many non-stop years of your life to debugging, programming and mastering the tiniest nuances of timing sources inside of guests (using various hypervisors, with various hypervisor configurations, at 1000 clock ticks per second, as that relates to which codec you are using, whether you will have 1 call or 10 calls or 100 calls going, whether you will be recording those calls, whether anyone else on the system will be having a conference call at the same time, and the exact version of the daemon and the driver), then I would humbly and professionally recommend going another route now, and saving your head and your hair for something actually worth your time.

NBD client and server on same machine

Is there any way to run an NBD (Network Block Device) client and server on the same machine without deadlocking the system?
I have exhausted myself looking for an answer to this. I'd appreciate it if anyone can help.
UPDATE:
I'm writing an NBD server that talks to the Google Storage system. I want to mount a file system on the NBD device and back up my files. I will be hugely disappointed if I end up having to run the server on another machine. A few ideas I already had seem to lead nowhere:
telling the file system to open the block device with the O_DIRECT flag to bypass the Linux buffer cache
using a raw device (unfortunately, raw devices are character devices and filesystems refuse to use them as the underlying device)
Just for the record, having the NBD client and server on the same machine has been possible since 2008.
Use a virtual machine (not a container) - you need two kernels, but you don't need two physical machines.
Since the front page of the SourceForge project for NBD says that a deadlock will happen "within seconds" in this scenario, I'm guessing the answer is a big "No."
Try to write a more complete question describing the actual goal you're trying to accomplish. There are times when you need to bang away at a little problem, and times when you need to step back and look at the big picture.
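For what it's worth, a same-host setup looks roughly like the commands below. The paths are hypothetical and this uses the classic nbd-tools positional syntax (newer versions use named exports via -N and a config file); it also needs root and the nbd kernel module:

```
# serve a backing file as an NBD export on port 10809
nbd-server 10809 /var/tmp/backing.img

# attach it locally as a block device
modprobe nbd
nbd-client localhost 10809 /dev/nbd0

# make a filesystem and mount it
mkfs.ext4 /dev/nbd0
mount /dev/nbd0 /mnt/nbd-backup
```

The deadlock risk comes from memory pressure: the kernel may need to flush dirty pages to /dev/nbd0 to free memory, but completing that write requires the userspace nbd-server to make progress, and if the server is itself blocked waiting for memory, the system wedges. That is why the safe answers are a separate kernel (a VM) or a server careful about its own memory use.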
