How to set up failover capability for an AutoSys job?

I have two unix servers, x1 and x2.
x1 is the primary server and x2 is the failover server.
In normal scenarios where x1 is up and running, we don't want anything to be run on x2. All the load should be handled by x1.
If x1 goes down, we want x2 to take on the load.
Is this easily achievable in AutoSys? I understand that there's load balancing capability, but we don't want both machines to be handling the load.
Looking at the AutoSys guide, I was thinking of maybe trying this example:
insert_machine: myvirtualmachine
machine: x1
factor: 1
machine: x2
factor: .01
The idea is to set the factor so low for x2 that it should never pick up any of the work unless x1 goes down. But I find this solution a bit crude.

There is no real way of doing this in AutoSys. What we did was create a DNS entry and, instead of pointing the machine definition at the server itself, we set it up against the DNS entry. If the primary goes down, DNS knows to send traffic to the secondary.
So it is not handled by AutoSys, but by your Cisco or F5 gear once the DNS entry is created.
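For example, a rough sketch of what that could look like in JIL, assuming a hypothetical alias name that DNS or the F5 resolves to whichever server is currently active (the job name and command path are also just placeholders):
insert_machine: batch_alias

insert_job: sample_job
machine: batch_alias
command: /opt/batch/run.sh
Jobs only ever reference the alias, so nothing has to change in AutoSys when traffic is switched from x1 to x2.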
Let me know if that helps.
Dave

Related

ProxySQL: how to configure a failover?

How can I configure ProxySQL to have a failover, independent of read-only or not? The parameter 'weight' in mysql_servers does not work properly in this case. I have some nodes (MariaDB 10.3) with master-master replication, and if node1 goes offline, node2 should perform the statements. If node1 comes back, it should be the first server again.
With my setup (all three servers in one hostgroup, differing only in 'weight'), failover works, but with up to 9 seconds of delay for the first statements after failover, and when node1 comes back, node2 remains the preferred server.
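For reference, that kind of weighted single-hostgroup setup looks roughly like this on the ProxySQL admin interface (the hostgroup ID, hostnames and weights below are just placeholders):
INSERT INTO mysql_servers (hostgroup_id, hostname, port, weight) VALUES (10, 'node1', 3306, 10000);
INSERT INTO mysql_servers (hostgroup_id, hostname, port, weight) VALUES (10, 'node2', 3306, 100);
INSERT INTO mysql_servers (hostgroup_id, hostname, port, weight) VALUES (10, 'node3', 3306, 1);
LOAD MYSQL SERVERS TO RUNTIME;
SAVE MYSQL SERVERS TO DISK;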
What is the best way to configure a failover with a fixed prioritization? Should it be done in the hostgroup or in the rules? Or is it perhaps not possible with ProxySQL?
PS: Maybe a similar question: ProxySQL active-standby setup

Autosys Machine Container

I am looking to set up a machine container in Autosys to look like the below example:
Example_Example_MAIN
Example_Example_MAIN.Machine_Name1
Example_Example_MAIN.Machine_Name2
Example_Example_MAIN.Machine_Name3
Example_Example_MAIN.Machine_Name4
The way I am currently controlling these machines is to send 2, 3 and 4 Offline and leave 1 Online. Then, if 1 goes Offline, I send 2 Online and the batch runs on that machine.
Is it possible to leave all the machines inside a container Online but specify a machine priority? For example, if I leave all machines Online, the batch would automatically target Machine_Name1, but if 1 goes Offline the batch would automatically target machine 2, and so on.
Sorry if this is a silly question, I'm still only a beginner!
Thank you in advance!
Cameron.
Yes, you can place all of your machines in a single pool. Autosys will only send jobs to the machines in the pool that are Online.
For load balancing beyond that, you'll have to configure the factor (how fast a machine is relative to the others) and max_load (how much work it can handle at once) for every machine in the pool, and set job_load units on each of your jobs to indicate how much of a machine's capacity they consume when running.
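A rough JIL sketch of how those attributes fit together (the numbers, the job name and the command path are only placeholders):
insert_machine: Machine_Name1
factor: 1.0
max_load: 100

insert_job: example_job
machine: Example_Example_MAIN
command: /opt/batch/run.sh
job_load: 25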
Refer to Chapter 3 of the Autosys user guide for the full details.

Rerouting Application Network Traffic at the Data Link Layer

Consider the following situation:
You have an application you are testing, but in order to test the networking functionality of said program, you are required to run multiple instances of it and have them communicate with one another.
Possible solutions are:
- Run software on individual machines connected by WAN or LAN.
- Run the software on virtual machines, all on the same computer.
I do not want to use either of these methods (the reasoning is irrelevant). I want to know if there is a way that I can reroute network transmissions from the test application (ideally in any programming language) such that I can run multiple instances of the same software on one computer and have them behave as if each were the only instance running on that computer.
In other words, I want to be able to code the application so that each instance listens on the same "listening" port (since only one instance will be running on each computer in production). Then, I want to know if I can reroute the network requests at a lower level than the application so that they do not interfere with each other (clash over the same port number).
Essentially, I want to build a virtual environment which only redirects the network calls (whereas a virtual machine takes far more resources and involves far more machinery). Is this possible, and how might I approach this problem?
Thank you!
UPDATE: This is a more accurate idea of what I want to accomplish:
Basically, I want to program another application which TRANSPARENTLY redirects bind requests to available ports and manages which applications are bound where. So from the application's perspective, all the instances are bound to port 1000, but in reality this other application is automatically managing which instance is bound where and avoiding potential conflicts. I feel like this could be accomplished with Windows Hooks, but I'm not sure how to implement it.
As far as I know, there is no sane way to multiplex the same port on the same network device. At the very minimum, you will need to choose one of the following:
- Run each instance of your program on a different port
- Create multiple virtual network interfaces
The first choice is easy and may be the one I would choose. The second one is closer to what you are looking for, but it would be a true PITA to set up - you can look into VirtualBox and its host-only networks for inspiration. If you are writing things on Linux you might look into pipes and chrooting, but you'll spend more time setting up that environment than writing your software.
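If you go with the first choice, the usual low-tech approach is to stop hard-coding the port and read it per instance from the environment; here is a minimal Java sketch (the variable name APP_LISTEN_PORT is made up, and the default of 1000 just mirrors the port mentioned above):
import java.io.IOException;
import java.net.ServerSocket;
import java.net.Socket;

public class ConfigurablePortServer {
    public static void main(String[] args) throws IOException {
        // Defaults to 1000 so single-instance production behaviour is unchanged;
        // each test instance gets its own APP_LISTEN_PORT value.
        int port = Integer.parseInt(
                System.getenv().getOrDefault("APP_LISTEN_PORT", "1000"));
        try (ServerSocket server = new ServerSocket(port)) {
            System.out.println("Listening on port " + port);
            try (Socket client = server.accept()) {
                // placeholder: handle one connection, then exit
            }
        }
    }
}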

Advanced: Link aggregation, MPIO, iSCSI MC/S

I am trying to find the proper way of accomplishing the following.
I would like to provide 2 Gbps access for clients accessing a file-server guest VM on an ESXi server, which itself accesses the datastore over iSCSI. Therefore the ESXi server needs a 2 Gbps connection to the NAS. I would also like to provide 2 Gbps directly on the NAS.
It looks like there are three technologies that can help: link aggregation (802.3ad, LAG, trunking), Multipath I/O (MPIO), and iSCSI Multiple Connections per Session (MC/S).
However, each has its own purpose and drawbacks. Aggregation provides 2 Gbps in total, but a single connection (I think it is hashed on source/destination MAC address) can only get 1 Gbps, which is useless for something like iSCSI, which is a single stream. MPIO seems a good option for iSCSI as it balances any traffic over two connections, but it seems to require two IPs on the source and two IPs on the destination. I am unsure about MC/S.
Here is what I would like to achieve; however, I am not sure which technology to employ on each pair of 1 Gbps NICs.
I also think this design is flawed, because doing link aggregation between the NAS and the switch would prevent me from using MPIO on the ESX host, since MPIO also requires two IPs on the NAS and I think link aggregation will give me a single IP.
Maybe using MC/S instead of MPIO would work?
Here is a diagram (not included).
If you want to achieve 2 Gbps to a VM in ESX, it is possible using MPIO and iSCSI, but as you say you will need two adapters on the ESX host and two on the NAS. The drawback is that your NAS will need to support multiple connections from the same initiator, and not all of them do. The path policy will need to be set to round-robin so you can use active-active connections. In order to get ESX to use both paths at roughly 50% each, you will need to adjust the round-robin balancing mode to switch paths every 1 IOPS instead of every 1000. You can do this by SSHing to the host and using esxcli (if you need full instructions on how to do that, I can provide them).
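For reference, on ESXi 5.x-style esxcli the change looks roughly like the lines below; the first command just lists devices so you can find the naa ID of the iSCSI LUN, the naa ID shown is a placeholder, and the exact command namespace differs between ESX/ESXi releases:
esxcli storage nmp device list
esxcli storage nmp psp roundrobin deviceconfig set --device naa.xxxxxxxxxxxxxxxx --iops 1 --type iops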
After this you should be able to run IOMeter on a VM and see the data rate go over 1 Gbps: maybe 150 MB/s with a 1500 MTU, and if you are using jumbo frames you will get around 200 MB/s.
On another note (which might prove useful to your setups in the future), it is possible to achieve 2 Gbps with two adapters on the source and a bonded adapter on the NAS (so 2 to 1) when using the MPIO iSCSI initiator that comes with Server 2008. This initiator works slightly differently from VMware's and doesn't require your NAS to support many connections from one initiator; from what I can tell it spawns multiple initiators instead of sessions.

Websphere 6.1: Issue in Multiple Cells Call using IIOP

I need some help with the issue below.
We have 2 machines, and each machine has 2 WebSphere cells installed on it.
Machine 1 (X1 and X2 cell)
Machine 2 (Y1 and Y2 cell)
We have a web application installed on the X1 cell, which has an EJB client component that invokes business methods on an EJB component installed on each of the 4 cells, i.e. X1, X2, Y1 and Y2. The EJB client component looks up the home interface via an IIOP URL lookup, using the InitialContext class.
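For context, that lookup follows the standard WebSphere IIOP/JNDI pattern; a minimal sketch is shown below (the provider URL host, port and JNDI name are placeholders, and the WAS client libraries are assumed to be on the classpath):
import java.util.Properties;
import javax.naming.Context;
import javax.naming.InitialContext;
import javax.naming.NamingException;

public class IiopLookupSketch {
    public static void main(String[] args) throws NamingException {
        Properties env = new Properties();
        // WebSphere's JNDI context factory; requires the WAS client libraries.
        env.put(Context.INITIAL_CONTEXT_FACTORY,
                "com.ibm.websphere.naming.WsnInitialContextFactory");
        // Bootstrap host/port of the target cell (placeholders).
        env.put(Context.PROVIDER_URL, "corbaloc:iiop:target_cell_host:2809");
        Context ctx = new InitialContext(env);
        // JNDI name is hypothetical; in real code, narrow the result with
        // PortableRemoteObject.narrow(...) to the EJB home interface.
        Object homeRef = ctx.lookup("ejb/SomeRemoteHome");
        System.out.println("Looked up: " + homeRef);
    }
}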
Communication between the EJB client component and Y1 and Y2 happens properly, without any issues. But communication with the X2 cell does not happen properly (we do not see any home-interface lookup problems in the logs); it somehow calls the business methods on the X1 server itself.
We previously had a plain Java client that used a main() method to invoke all four servers. That setup was up and running in production for 2 years. The problem started when we moved the invocation logic for the four servers into the web application instead of main().
What difference does it make that X1 and X2 are on the same physical machine?
If the servers have the same name, then I suspect you need the com.ibm.websphere.orb.uniqueServerName property specified in the "Two servers with the same name running on the same host are being used to interoperate" section of this InfoCenter article:
http://publib.boulder.ibm.com/infocenter/wasinfo/v6r1/topic/com.ibm.websphere.nd.multiplatform.doc/info/ae/ae/rtrb_namingprobs.html
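If that is what is happening, one way to apply it (the exact admin-console path varies by version) is to set the property to true as a JVM system property on the servers involved, i.e. something like:
-Dcom.ibm.websphere.orb.uniqueServerName=true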
I encountered this problem once on a test system. It occurs if WebSphere (incorrectly) determines that the EJB actually runs in the local server. In my case this occurred with two servers running on the same host and configured with the same server name (server1). Unfortunately I don't know any solution (other than reinstalling one of the servers with a different server name).
