Keeping the order of actions for two servers (local and remote host) - pyinfra

I have a challenge deploying my code to a remote server, because I don't know how to enforce a particular order of actions across the local and the remote machine:
1. Set up server (remote)
2. Prepare data (local)
3. Copy files (local to remote)
4. Process files (remote)
5. Copy files (remote to local)
6. Start services (remote)
I start my deployment with:
$ pyinfra inventory.py deploy.py
This is my inventory.py
my_hosts = ["cattleserver"]
my_local = ['#local']
In deploy.py I try to map the order above to corresponding code.
But this seems not to be possible, because pyinfra has no concept of ordering (correct me if I am wrong); it just guarantees an end state.
Is there a way to express the order above?
Regards,
Alexander
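For what it's worth, pyinfra does, to my understanding, execute operations in the order they appear in a deploy file, so one way to sketch the sequence is to gate each step on the host it belongs to. This is only a sketch, not a tested deployment: the script names and paths are placeholders, and note that pyinfra's built-in local connector is usually spelled @local rather than #local.

```python
# deploy.py -- a sketch; script names and paths are placeholders, and it
# assumes pyinfra runs operations in the order they are declared here.
from pyinfra import host
from pyinfra.operations import files, server

on_remote = host.name == "cattleserver"
on_local = host.name == "@local"  # pyinfra's built-in local connector

if on_remote:
    server.shell(name="Set up server", commands=["./setup.sh"])

if on_local:
    server.shell(name="Prepare data", commands=["./prepare_data.sh"])

if on_remote:
    files.put(name="Copy files to remote", src="data.tar", dest="/srv/app/data.tar")
    server.shell(name="Process files", commands=["/srv/app/process.sh"])
    files.get(name="Copy results back", src="/srv/app/results.tar", dest="results.tar")
    server.shell(name="Start services", commands=["systemctl start myapp"])
```

If strict ordering across hosts turns out not to hold in your pyinfra version, an alternative is to split the steps into several deploy files and chain the runs yourself: pyinfra inventory.py step1.py && pyinfra inventory.py step2.py and so on.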

Related

Difference between h2o.download_mojo() and h2o.saveMojo()?

Both functions are available in version 3.18.0.4, and the only apparent difference is that h2o.saveMojo(force = T) allows you to overwrite an existing file with the same name. Why are there two? Any relative (dis)advantages?
Cheers
H2O-3 is a client-server architecture. The H2O Flow Web UI, the R session, the Python session are all clients. The H2O java process is the server.
Often the client and server are running on the same host (for example, in the case where H2O is started with h2o.init()), and in those situations it can be hard to tell the difference between the client and the server. But when you start a multi-node H2O job on Hadoop and connect to it explicitly from an R session using an IP address, the client/server separation becomes quite obvious to the user.
So with that as background:
h2o.download_mojo() is the client pulling the mojo artifact, and storing it to the client filesystem
h2o.saveMojo() is the server pushing the mojo artifact, either to the server filesystem or to a network filesystem (e.g. HDFS)

Should I use a Coherence standalone server for a Java webservice to use cached data?

I am new to Oracle Coherence.
Basically we have some data, and we want a Java/BPEL webservice to get that data from a Coherence cache instead of the database. [We are planning to load all of that data into the cache server.]
So we have the questions below before we start on this solution.
The webservice we are planning to build will be plain Java, and all operations are read-only.
Questions
1. Does Coherence need to be a standalone server (downloaded from Oracle, installed separately, and run as the default cache server)?
2. If so, we are planning to preload the data from the database into the cache server using code. I hope that is possible? Any pointers would be helpful.
3. How does the webservice connect to the Coherence server if the webservice runs on a different machine than the Coherence server?
(OR)
Is it mandatory that the webservice and Coherence run on the same machine?
If the webservice can run on a different machine, how does the webservice code connect to the Coherence server (any code sample or URL would be helpful)?
Also, what is the Coherence that comes with WebLogic? I assume it is not a fit for our application's design? If so, what type of solution should we go for with WebLogic and Coherence?
FYI: our goal is simple. We want to store the data in a cache server and have our new webservice retrieve the data from the cache server instead of the database (because we are planning to avoid the database round trip).
Well, your questions are very open and probably have more than one correct answer. I'll try to answer all of them.
First, please take into consideration, that Coherence is not a free tool and you have to pay for a license.
Now, to the answers:
Basically, Coherence has two parts: proxy and server. The first is responsible for routing your requests, and the second for hosting the data. You can run both together in the same service, but this has pros and cons. One con is that your services will be heavily loaded and the memory will be shared between two kinds of processes. A pro is that it is very simple to run.
You can preload all the data from the DB. For that you have to write your own code: define your own CacheStore (look for that keyword in the Coherence docs) and override the loadAll method.
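Since the Coherence API itself can't be shown here, a language-agnostic illustration of that preload step (read the whole table once, hand every row to the cache) can be sketched in Python. The table and column names are made up, and a plain dict stands in for the cache where Coherence would use a CacheStore.loadAll override:

```python
# Generic sketch of "preload the DB into the cache" (not Coherence code):
# sqlite3 plays the database, a plain dict plays the cache.
import sqlite3

def load_all(conn, table):
    """Read every row once and return {key: value} pairs for the cache."""
    rows = conn.execute(f"SELECT id, payload FROM {table}").fetchall()
    return {key: value for key, value in rows}

# Demo with an in-memory database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE items (id INTEGER PRIMARY KEY, payload TEXT)")
conn.executemany("INSERT INTO items VALUES (?, ?)", [(1, "alpha"), (2, "beta")])

cache = load_all(conn, "items")  # warm the cache once at startup
print(cache[1])                  # subsequent reads hit the cache: alpha
```

The point of the pattern is that the full-table read happens exactly once, at startup; after that, read-only requests never touch the database.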
As far as I remember, Coherence comes together with WebLogic; the license for one is the license for the other, and they come in the same product. I'm not familiar with WebLogic, but I suppose it is a service of the package. In any case, for connecting to Coherence you can refer to Configuring and Managing Coherence Clusters.
The Coherence services can run on different machines, in different networks, and even in different parts of the world if you want. Each of the proxy, server, consumer, and DB could be in a different network; everything can be configured. You have to tell your WebLogic server where the Coherence proxy will be, set the relevant addresses in the Coherence proxy/server configuration, and configure your Coherence server to find its database. It is a bit complicated to explain everything here.
I think I answered this above.
Just take into consideration that Coherence is a very powerful tool but very complicated to operate and troubleshoot. Consider the pros and cons versus accessing your DB directly, and think about whether you really need it.
If you have specific questions, please don't hesitate. It is a bit complicated to explain everything here, since you're trying to set up one of the most complicated systems I have ever seen. But it is excellent, and I really recommend it.
EDIT:
Basically, Coherence is composed of two main parts: proxy and server. The names are a bit confusing, since both are servers, but the proxy serves the clients trying to perform cache operations (CRUD), while the "servers" serve the proxies. The proxy is responsible for receiving all the requests, processing them, and routing them, according to their keys, to the respective server that holds the data (or that would be responsible for holding it if the operation requires a load). So the answer to your question is: yes, you need at least one proxy active in your cluster, otherwise you'll be unable to operate correctly. It can run on one of your machines or on a third one. It is recommended to have more than one proxy for HA purposes, and proxies can act as servers as well (by setting the localstorage flag to true). I know it is a bit complicated, and I recommend following the Oracle docs.
Essentially, there are 2 types of Coherence installation.
1) Stand-alone installation (without a WebLogic Server in the mix)
2) Managed installation (with Weblogic Server in the mix)
Here are a few characteristics for each of the above
Stand-alone installation (without a WebLogic Server in the mix)
Download the Coherence installation package and install (without any dependency on existing WebLogic or FMW installations)
Setup and Configure the Coherence Servers from the command-line
Administer and Maintain the Coherence Servers from the command-line
Managed installation (with Weblogic Server in the mix)
Utilize the existing installation of Coherence that was installed when WebLogic or FMW was installed
Setup and Configure the Managed Coherence Servers to work with WebLogic Server
Administer and Maintain the Managed Coherence Servers via the WebLogic console
Note the key difference in terminology, Coherence Servers (no WL dependency) vs. Managed Coherence Servers (with WL dependency)

SQL Server session in CLUSTER

Can anyone help me with this ...
I have a 3-node SQL Server cluster, let's say N1, N2, and N3. The name of the three-node cluster is SQLCLUS.
The application connects to the database using the name SQLCLUS in its connection string.
The application uses SQL Server session management. So I remote-desktopped to N1 (which is active, while N2 and N3 are passive) and from the location
C:\Windows\Microsoft.NET\Framework64\v2.0.50727
I executed the following command
aspnet_regsql.exe -S SQLCLUS -E -ssadd -sstype p
The command executed successfully. I could then log into SQLCLUS and see the ASPState database created with 2 tables.
I then tested the application which uses the SQL Server session, and it also works fine.
Now my question is ...
If there is a failover to node N2 or N3, will my application still work? I did not execute the above command (aspnet_regsql.exe) from N2.
Should I execute the command aspnet_regsql.exe -S SQLCLUS -E -ssadd -sstype p on N2 and N3 too?
What changes happen in SQL Server after executing the above command? I mean, is there any kind of service or settings change that can be seen?
Greatly appreciate any inputs regarding this.
Thanks in advance.
SQL Server failover clustering can be conceptually explained as a smoke-and-mirrors DNS hack. Thinking of clustering in familiar terms makes you realize how simple a technology it really is.
Simplified description of Sql Server Failover Clustering
Imagine you have two computers: SrvA and SrvB
You plug an external HD (F:) into SrvA, install SQL Server, and configure it to store its database files on F:\ (the executable is under C:\Program Files).
Unplug the HD, plug it into SrvB, install SQL Server, and configure it to store its database files on F:\ in the exact same location.
Now, you create a DNS alias "MyDbServer" that points to SrvA, plug the external HD back into SrvA, and start SQL Server.
All is good until one day when the power supply fails on SrvA and the machine goes down.
To recover from this disaster you do the following:
Plug the external drive into SrvB
Start sql server on SrvB
Tweak the DNS entry for "MyDbServer" to point to SrvB.
You're now up and going on SrvB, and your client applications are blissfully unaware that SrvA failed because they only ever connected using the name "MyDbServer".
Failover Clustering in Reality
SrvA and SrvB are the cluster nodes.
The External HD is Shared SAN Storage.
The three step recovery process is what happens during a cluster failover and is managed automatically by the Windows Failover Clustering service.
What kinds of tasks need to be run on each Sql Node?
99.99% of the tasks that you perform in SQL Server will be stored in the database files on shared storage and therefore will move between nodes during a failover. This includes everything from creating logins, creating databases, INSERTs/UPDATEs/DELETEs on tables, SQL Agent jobs, and just about everything else you can think of. This also includes all of the tasks that the aspnet_regsql command performs (it does nothing special from a database perspective).
The remaining .01% of things that would have to be done on each individual node (because they aren't stored on shared storage) are things like applying service packs (remember that the executable is on c:), certain registry settings (some Sql Server registry settings are "checkpointed" and failover, some aren't), registering 3rd party COM dll's (no one really does this anymore) and changing the service account that Sql Server runs under.
Try it for yourself
If you want to verify that aspnet_regsql doesn't need to be run on each node, then try failing over and verify that your app still works. If you do run aspnet_regsql on each node and reference the clustered name (SQLCLUS), then you will effectively be overwriting the database; if it doesn't error out, it will just wipe out your existing data.
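As an aside on the application side: the web app only ever references the clustered name, so nothing there changes across a failover either. A sketch of the web.config fragment pointing session state at the cluster might look like this (the timeout value is a placeholder; adjust the connection string to your authentication setup):

```xml
<!-- sketch: ASP.NET session state stored in SQL Server via the cluster name -->
<system.web>
  <sessionState mode="SQLServer"
                sqlConnectionString="Data Source=SQLCLUS;Integrated Security=SSPI;"
                cookieless="false"
                timeout="20" />
</system.web>
```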

Erlang: starting a remote node programmatically

I am aware that nodes can be started from the shell. What I am looking for is a way to start a remote node from within a module. I have searched but have found nothing.
Any help is appreciated.
There's a pool(3) facility:
pool can be used to run a set of Erlang nodes as a pool of computational processors. It is organized as a master and a set of slave nodes.
pool:start/1,2 starts a new pool. The file .hosts.erlang is read to find host names where the pool nodes can be started. The slave nodes are started with slave:start/2,3, passing along Name and, if provided, Args. Name is used as the first part of the node names; Args is used to specify command line arguments.
With pool you get a load distribution facility for free.
The master node may be started this way:
erl -sname poolmaster -rsh ssh
The -rsh flag here specifies an alternative to rsh for starting a slave node on a remote host; we use SSH here. Make sure your boxes have working SSH keys and that you can authenticate to the remote hosts using those keys.
If there are no hosts in the file .hosts.erlang, then no slave nodes are started, and you can use slave:start/2,3 to start slave nodes manually passing arguments if needed.
You could, for example, start a remote node:
%% H is the remote host name, Name the node name, M the Mnesia directory
Arg = "-mnesia_dir " ++ M,
slave:start(H, Name, Arg).
Ensure epmd(1) is up and running on the remote boxes in order to start Erlang nodes.
Hope that helps.
A bit more low-level than pool is the slave(3) module; pool builds upon the functionality in slave.
Use slave:start to start a new slave.
You should probably also specify -rsh ssh on the command-line.
So use pool if you need the kind of functionality it offers; if you need something different, you can build it yourself out of slave.

New host key every day using MSFTP and WinSCP

I am transferring a file from one server to another using "Core FTP mini-sftp-server" on the source side and WinSCP on the destination side (both servers are running Windows).
I log into these two machines using a local admin account, which is the same on both servers.
I have been doing this process manually:
Start MSFTP server on source
Start WinSCP on destination, connect to source and get the file.
Now I want to automate it, so I tried the following:
Start msftp from the command line on the source.
On the destination, in the winscp.exe console:
open login:password#IPAdress
get <file> <destination>
close
exit
The problem with this is that when I do it for the first time each day, it asks me to update the key on the destination side, saying:
"WARNING - POTENTIAL SECURITY BREACH! The server's host key does not match the one WinSCP has in cache. This means that either the server administrator has changed the host key, the server presents different key under certain circumstances, or you have actually connected to another computer pretending to be the server."
I have to do it manually (click Update) the first time, and then for the following copies the automation works.
Questions:
How can I update the key from the command line while connecting to the server?
Can I prevent the source from generating a new key daily? Or should I even do that?
You should prevent the source server from generating a new key - there is absolutely no reason to do so. The server's public key identifies the server, so this identity shouldn't change.
You lose any security by connecting to an SSH server that changes its public key every day.
Anyway, if that's your only option, recent WinSCP allows accepting any host key automatically using the -hostkey=* switch of the open command:
open -hostkey=*
You lose any security by doing that, but you have lost it already, so it makes no difference.
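Putting the pieces together, the unattended transfer could then look something like this (a sketch: the host, credentials, and file paths are placeholders):

```
winscp.com /command ^
    "open sftp://login:password@IPAddress/ -hostkey=*" ^
    "get file.txt C:\destination\" ^
    "close" ^
    "exit"
```

A safer variant, if you can stop the source regenerating its key, is to pin the expected fingerprint instead of *, e.g. -hostkey="ssh-rsa 2048 xx:xx:..." as reported by a first interactive connection.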
