If I am writing a program (in my case, a Python program using Paramiko) that executes multiple tasks on a remote server, for example removing some files, later running a program, and later moving some files, is it better to create a single SSH client and leave it open for the duration, or should I treat SSH connections more like files and databases and open and close them each time I execute a task?
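For illustration, here is a minimal sketch of the "single client for the whole run" approach described above, using Paramiko; the host, username, and commands are placeholders, and key-based authentication is assumed.

import paramiko

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect("server.example.com", username="user")  # placeholder host/user, assumes key-based auth

try:
    for command in ["rm -f /tmp/old/*.dat",        # remove some files
                    "/opt/app/run_job.sh",         # run a program
                    "mv /tmp/out/* /data/done/"]:  # move some files
        stdin, stdout, stderr = client.exec_command(command)
        exit_status = stdout.channel.recv_exit_status()  # wait for each task to finish
        print(command, "->", exit_status)
finally:
    client.close()  # one connection, closed once at the end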
The command ls -lu script_name.sh only gives the last access time of the script. Is there any way to determine when the script was last executed?
Processes in Linux do not normally leave traces after they terminate, unless they create or modify files, write syslog messages, or the audit subsystem is enabled and keeping track of exec* calls.
I want to upload every 30 seconds via SFTP (using WinSCP commands in MATLAB). The script is running fine: connecting, synchronizing, closing.
winscp.com /command "open xx@xxx.com/dir" "synchronize remote -mirror dir" "exit"
Now, for this continuously running script, is it smarter to reconnect (and close when finished) every time, or is there no problem with just staying connected and synchronizing every 30 seconds?
WinSCP does not have any "pause" command nor any kind of loop control structure, so you cannot stay connected using just the simple WinSCP scripting interface.
You would have to use a more advanced technique, such as the WinSCP .NET assembly, and code the loop/pause in PowerShell or another language.
To actually answer your question: I do not think it really matters. The solution with reconnecting is definitely easier to implement. If you wanted to stay connected, you would also have to implement reconnection (in case the connection is lost).
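As a rough illustration of the reconnect-every-time approach (which the answer calls easier to implement), the loop below simply re-invokes winscp.com every 30 seconds. It is a sketch in Python rather than the PowerShell/.NET assembly route mentioned above, and the WinSCP path, session URL, and directories are placeholders.

import subprocess
import time

WINSCP = r"C:\Program Files (x86)\WinSCP\winscp.com"  # assumed install location

while True:
    # Each pass opens a fresh session, mirrors the local folder, and exits.
    subprocess.run([
        WINSCP,
        "/command",
        "open sftp://user@example.com/",                           # placeholder session
        "synchronize remote -mirror C:\\local\\dir /remote/dir",   # placeholder paths
        "exit",
    ])
    time.sleep(30)  # pause before the next upload cycle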
I need to run some commands on some remote Solaris/Linux servers and collect their output in a log file on my local server.
Currently, I'm using a simple Expect script, residing on the local server, to fire the commands on the target system. I then redirect the output of the Expect script to a log file, like this:
/usr/local/bin/expect script.exp >> logfile.txt
However, this is proving to be very unreliable as the connection to the server fluctuates a lot, leading to incomplete logs and hung scripts.
Is there a better and more reliable way to go about this task?
I have implemented fedorqui's answer:
Created a (shell) script that runs the required commands on the target servers.
Deployed this script to all servers.
Executed this script via expect, from my local (central) server.
Finally collected logs individually from each server after successful completion, and processed them.
The solution has been working fine without a glitch till now.
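For comparison, a rough sketch of the same run-the-deployed-script-and-collect-the-log workflow is shown below using Paramiko instead of Expect; the hostnames, username, and remote paths are placeholders, and key-based authentication is assumed.

import paramiko

SERVERS = ["server1.example.com", "server2.example.com"]  # placeholder hosts
REMOTE_SCRIPT = "/opt/scripts/collect.sh"                 # placeholder deployed script
REMOTE_LOG = "/tmp/collect.log"                           # placeholder log path

for host in SERVERS:
    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    client.connect(host, username="monitor")  # assumes key-based auth
    try:
        # Run the deployed script and wait for it to finish before fetching its log.
        stdin, stdout, stderr = client.exec_command(f"{REMOTE_SCRIPT} > {REMOTE_LOG} 2>&1")
        stdout.channel.recv_exit_status()
        sftp = client.open_sftp()
        sftp.get(REMOTE_LOG, f"{host}.log")  # collect the log locally, one file per server
        sftp.close()
    finally:
        client.close()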
I have always used the os:cmd/1 method to call operating system routines. Now, I know that Erlang has an ssh application. I would like to know how I can use this module to ssh into a Solaris server, run a command, and collect the reply. I believe that such an operation would be handled asynchronously. I need an example using the ssh application built into Erlang doing this:
Now, at times we set up SSH keys between servers to prevent password prompts, especially when a script is used to execute tasks on remote servers. I intend to write many Erlang programs or escripts that will interact with many remote servers within our environment. I need a complete example and explanation of how ssh with and/or without a password prompt can be handled using the Erlang ssh application. NOTE: In the screenshot above, the two servers had SSH keys set up, so there is no password prompt when ssh is initiated from either of them.
The correct Erlang native API to achieve this is not ssh, which only implements a user-interactive shell over ssh, but ssh_connection. Take a look at ssh_connection:exec/4.
To be more complete, use ssh:connect to establish a connection and then use the connection handle it returns with ssh_connection:exec/4.
I didn't try it myself and can't provide a complete example, but the documentation seems to be a good starting point.
Every month we send reports to a server using FTP. We run a query on a database to create the files then use the ftp functionality in LabVIEW to do the transfer. This runs on a Windows system.
This works fine, but now we have to switch to using SFTP, and the CopSSH package has been recommended. As LabVIEW has no native SFTP functionality, we are looking at how we can use the sftp.exe application from CopSSH.
From the command prompt we have set up the encryption and made the initial connection using sftp username@host and entered the password. This has been confirmed by the team on the server side, so the connection to the server is set up. Now we just use sftp username@host and no password is required.
Where we are struggling is how to initiate the transfer from our LabVIEW code. We are able to call system commands using the System Exec VI but is there a way to pass a list of functions to the SFTP executable?
The commands used to transfer the files when we type them at the command prompt are:
sftp username@host
put c:/Data/File1.txt remoteFile1
put c:/Data/File2.txt remoteFile2
put c:/Data/File3.txt remoteFile3
quit
This works from the command prompt but I am looking to just call the sftp executable with a list of files to transfer. I don't think this would be specific to LabVIEW as you could use a batch file to run from a scheduled job.
LabVIEW can call ActiveX and .NET, but we really need to use this specific application.
I have been using WinSCP, which has a command line version, winscp.com. It supports SFTP and allows synchronize, keepuptodate, get, put and delete on folders and files. One word of warning: keepuptodate depends on an unbroken connection. Although WinSCP can remake a connection automatically, keepuptodate cannot. I suspect it is based on Microsoft's .NET System.IO FileSystemWatcher. I therefore do a regular synchronize to keep a mirror of my source folder tree on the remote target.
If copssh's sftp.exe is a command line utility, and System Exec in your version of LabVIEW has the 'standard input' terminal (present at least since 8.5), you should be able to simply wire the commands you want sftp.exe to run into the standard input terminal.
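Outside of LabVIEW, the same idea (feeding the batch of commands to sftp.exe via standard input) looks roughly like this; the username, host, and file names are the placeholders from the question, sftp is assumed to be on the PATH, and the key-based login already set up means no password prompt appears.

import subprocess

# The put commands from the question, joined into one stdin payload.
commands = "\n".join([
    "put c:/Data/File1.txt remoteFile1",
    "put c:/Data/File2.txt remoteFile2",
    "put c:/Data/File3.txt remoteFile3",
    "quit",
]) + "\n"

# Pipe the whole batch into sftp's standard input.
subprocess.run(["sftp", "username@host"], input=commands, text=True, check=True)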
If that doesn't work for some reason, could you use PuTTY instead of copssh? The documentation for PuTTY's PSFTP component says that it can execute a sequence of commands in a script file using the -b command line switch, e.g.
psftp user@hostname -b myscript.scr
so you could have your LabVIEW program create the script file then run it with System Exec.
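As a sketch of that approach outside LabVIEW, the snippet below writes the script file and then runs psftp with -b; the user, hostname, and file names are placeholders taken from the question, psftp is assumed to be on the PATH, and key-based authentication is assumed.

import subprocess

# Write the sequence of sftp commands to a script file for psftp -b.
with open("myscript.scr", "w") as f:
    f.write("put c:/Data/File1.txt remoteFile1\n")
    f.write("put c:/Data/File2.txt remoteFile2\n")
    f.write("put c:/Data/File3.txt remoteFile3\n")
    f.write("quit\n")

# Run the script non-interactively.
subprocess.run(["psftp", "user@hostname", "-b", "myscript.scr"], check=True)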
You are mixing up SSH and SFTP. SSH opens a secure connection, but SFTP is a separate protocol that runs over an SSH connection (as an SSH subsystem). In OpenSSH (and its Windows port, copSSH) it is the sftp.exe application that does SFTP.
Now about FTP vs SFTP: please check an article that explains the difference between SFTP and FTP(S). Even if LabVIEW supports FTP, that doesn't help you when you need to perform SFTP transfers.
I don't know whether you can use external ActiveX controls in LabVIEW. If you can, you are welcome to check our SFTP ActiveX control, which will let you do the transfer. If all you can do is call an external application, then you'd have to use copSSH's sftp.exe.