Password-less File transfer using RCP and Expect - unix

I have a requirement to transfer files from one server to another. I used the rcp command to do this and it was working fine. Please find the code below:
rcp tst.txt username@hostname:/home/username/destination_folder
I tried to automate the same using Expect command so created the below mentioned shell script:
#!/bin/bash
/usr/bin/expect -d<<EOD
spawn rcp tst.txt username@hostname:/home/username/destination_folder
expect "*username@hostname's password:*"
send "mypassword\n"
EOD
I didn't get any error while executing the shell script, but the file was not transferred. Can someone help me figure out what the issue is?
I have tried password-less transfer through key generation but with no luck, so I am trying the rcp approach.
Thanks in Advance,
Vijay

After sending the password, wait for rcp to complete by expecting eof.
send "mypassword\r"
expect eof
If the transfer is password-less, then just expect eof right after spawning rcp.
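Putting the two together, a minimal sketch of the asker's script with that fix applied (username, hostname, destination path and password are placeholders, not values from the original post):
#!/bin/bash
/usr/bin/expect <<EOD
# copy the file, answer the password prompt, then wait for rcp to finish
spawn rcp tst.txt username@hostname:/home/username/destination_folder
expect "*password:*"
send "mypassword\r"
expect eof
EOD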

Related

Getting wrong commands from the Minecraft console

I started a local server and want to add some simple commands with Python. The server is running Forge 1.12 and a couple of mods.
My idea was to catch wrong commands and send the right result instead.
An easy test command would be /echo Hello World, with Hello World as the result in the chat.
To get the command, I am using the last line of the latest console log file, which matches the current console content. But I can't read wrong commands in the console. So if I run the echo command, I get a message in the chat: Unknown command. Try /help for a list of commands.
I think there could be two solutions:
Register the command somewhere so that it reaches the console, preventing the server's response that way, and then read the command from the console to use it.
Find a config option that also prints wrong commands to the console.
Thanks for helping
There is no way to 'cancel' commands through the API, but there is a trick to effectively cancel commands anyway. You want to listen to the Forge CommandEvent and modify the command into another existing command that does nothing (you can create one yourself). This gives you a place to handle all commands (you'll have to filter for nonexistent commands, otherwise you'd cancel all commands), and it will prevent the Unknown Command message from showing.

How to prevent "Execution failed:[Errno 32] Broken pipe" in Airflow

I just started using Airflow to coordinate our ETL pipeline.
I encountered the broken pipe error when running a DAG.
I've seen a general Stack Overflow discussion here.
My case is more on the Airflow side. According to the discussion in that post, the possible root cause is:
The broken pipe error usually occurs if your request is blocked or takes too long and after request-side timeout, it'll close the connection and then, when the respond-side (server) tries to write to the socket, it will throw a pipe broken error.
This might be the real cause in my case. I have a PythonOperator that starts another job outside of Airflow, and that job could be very lengthy (i.e. 10+ hours). I wonder what mechanism Airflow has in place that I could leverage to prevent this error.
Can anyone help?
UPDATE1 20190303-1:
Thanks to @y2k-shubham for the SSHOperator: I am able to use it to set up an SSH connection successfully and run some simple commands on the remote site (indeed, the default SSH connection has to be set to localhost because the job is on localhost), and I can see the correct results of hostname and pwd.
However, when I attempted to run the actual job, I received the same error; again, the error is from the pipeline job instead of the Airflow DAG/task.
UPDATE2: 20190303-2
I had a successful run (airflow test) with no error, which was then followed by another failed run (scheduler) with the same error from the pipeline.
While I'd suggest you keep looking for a more graceful way of achieving what you want, I'm putting up example usage as requested.
First you've got to create an SSHHook. This can be done in two ways
The conventional way, where you supply all requisite settings like host, user, password (if needed), etc. from the client code where you are instantiating the hook. I'm citing an example from test_ssh_hook.py here, but you must thoroughly go through SSHHook as well as its tests to understand all possible usages.
ssh_hook = SSHHook(remote_host="remote_host",
                   port="port",
                   username="username",
                   timeout=10,
                   key_file="fake.file")
The Airflow way, where you put all connection details inside a Connection object that can be managed from the UI, and only pass its conn_id to instantiate your hook
ssh_hook = SSHHook(ssh_conn_id="my_ssh_conn_id")
Of course, if you're relying on SSHOperator, then you can directly pass the ssh_conn_id to the operator.
ssh_operator = SSHOperator(ssh_conn_id="my_ssh_conn_id")
Now, if you're planning to have a dedicated task for running a command over SSH, you can use SSHOperator. Again I'm citing an example from test_ssh_operator.py, but go through the sources for a better picture.
task = SSHOperator(task_id="test",
                   command="echo -n airflow",
                   dag=self.dag,
                   timeout=10,
                   ssh_conn_id="ssh_default")
But then you might want to run a command over SSH as part of your bigger task. In that case, you don't want an SSHOperator; you can still use just the SSHHook. The get_conn() method of SSHHook provides you an instance of a paramiko SSHClient. With this you can run a command using the exec_command() call:
my_command = "echo airflow"
stdin, stdout, stderr = ssh_client.exec_command(
command=my_command,
get_pty=my_command.startswith("sudo"),
timeout=10)
If you look at SSHOperator's execute() method, it is a rather complicated (but robust) piece of code trying to achieve a very simple thing. For my own usage, I had created some snippets that you might want to look at
For using SSHHook independently of SSHOperator, have a look at ssh_utils.py
For an operator that runs multiple commands over SSH (you can achieve the same thing by using bash's && operator), see MultiCmdSSHOperator
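To tie it together for the original long-running job, here is a minimal sketch (my own illustration rather than code from the linked snippets) of running the job over SSH from inside a PythonOperator using the SSHHook; the connection id my_ssh_conn_id, the DAG name and the remote command are placeholder assumptions:
from datetime import datetime

from airflow import DAG
from airflow.contrib.hooks.ssh_hook import SSHHook
from airflow.operators.python_operator import PythonOperator


def run_remote_job():
    ssh_hook = SSHHook(ssh_conn_id="my_ssh_conn_id")  # connection defined in the Airflow UI
    ssh_client = ssh_hook.get_conn()                  # paramiko SSHClient
    try:
        stdin, stdout, stderr = ssh_client.exec_command("bash /path/to/long_job.sh")
        exit_status = stdout.channel.recv_exit_status()  # blocks until the remote command finishes
        if exit_status != 0:
            raise RuntimeError("remote job failed: %s" % stderr.read().decode())
    finally:
        ssh_client.close()


dag = DAG("remote_job_example", start_date=datetime(2019, 3, 1), schedule_interval=None)

run_job = PythonOperator(task_id="run_remote_job",
                         python_callable=run_remote_job,
                         dag=dag)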

sqoop2 client batch mode script example

Can anyone show me an example of a script that can be run from the sqoop2 client in batch mode?
I referred to http://sqoop.apache.org/docs/1.99.2/Sqoop5MinutesDemo.html
and it says we can run sqoop2 client in batch mode using the following command
sqoop.sh client /path/to/your/script.sqoop
but that script.sqoop isn't like a sqoop1 script, so what should it look like?
A batch file is nothing but a list of the same commands you would otherwise type in interactive mode (plus comment lines starting with a pound sign).
However! Some commands require manual input, thus cannot be easily fully automated (e.g., 'create link' command). See this thread for details.
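For illustration, a hypothetical script.sqoop could look like the sketch below (the server host and job id are placeholders; the link and job would have been created interactively beforehand). It is run exactly as in the question: sqoop.sh client /path/to/script.sqoop.
# script.sqoop - the same commands you would type interactively, one per line
set server --host sqoop2.example.com --port 12000 --webapp sqoop
show version --all
show connector --all
# start a job that already exists on the server
start job --jid 1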

How to get the logs from a continuously running child process in PowerShell?

I am calling a child process from a process in PowerShell.
The child process will not end; it will be running continuously in the background.
I need the logs written by the child process continuously.
Process.StandardOutput.ReadToEnd() only writes the logs to the file once the child process ends.
But I need the logs continuously.
Please help me in this regard.
You can redirect the log to files as below:
start-process your_executable -ArgumentList "your arguments" -RedirectStandardOutput the_path_you_want_to_put_your_log -RedirectStandardError the_path_to_put_error_log
The logs of the child process will be written to the log files continuously; you can open the files to check them from time to time.
Edit
And I think Unix tail equivalent command in windows Powershell will help you more; it explains how to monitor a log file in real time, for example by using the Get-FileTail cmdlet from the PowerShell Community Extensions.
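A rough sketch combining both suggestions (the executable, arguments and log paths are placeholders, not from the original question):
# start the long-running child process with its output redirected to files
Start-Process -FilePath "C:\tools\worker.exe" `
    -ArgumentList "--verbose" `
    -RedirectStandardOutput "C:\logs\worker.out.log" `
    -RedirectStandardError "C:\logs\worker.err.log"

# follow the log in real time (built-in alternative to Get-FileTail / Unix tail -f)
Get-Content -Path "C:\logs\worker.out.log" -Wait -Tail 10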

have R halt the EC2 machine it's running on

I have a few work flows where I would like R to halt the Linux machine it's running on after completion of a script. I can think of two similar ways to do this:
run R as root and then call system("halt")
run R from a root shell script (could run the R script as any user) then have the shell script run halt after the R bit completes.
Are there other easy ways of doing this?
The use case here is for scripts running on AWS where I would like the instance to stop after the script completes so that I don't get charged for machine time after the job runs. The instance I use for data analysis is an EBS-backed instance, so I don't want to terminate it, simply suspend it. Issuing a halt command from inside the instance has the same effect as a stop/suspend from the AWS console.
I'm impressed that works. (For anyone else surprised that an instance can stop itself, see notes 1 & 2.)
You can also try "sudo halt"; then you wouldn't need to run R as the root user, as long as the user account running R is capable of running sudo. This is pretty common on a lot of AMIs on EC2.
Be careful about what constitutes an assumption of R quitting - believe it or not, one can crash R. It may be better to have a separate script that watches the R pid and, once that PID is no longer active, terminates the instance. Doing this command inside of R means that if R crashes, it never reaches the call to halt. If you call it from within another script, that can be dangerous, too. If you know Linux well, what you're looking for is the PID from starting R, which you can pass to another script that checks ps, say every 1 second, and then terminates the instance once the PID is no longer running.
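A rough sketch of that watcher idea, assuming the R script is launched via Rscript and that the watching account may run halt through sudo (the script path is a placeholder):
#!/bin/bash
# start the R job in the background and remember its PID
Rscript /path/to/analysis.R &
R_PID=$!

# check ps every second; once the R process is gone, stop the machine
while ps -p "$R_PID" > /dev/null 2>&1; do
    sleep 1
done
sudo halt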
I think a better solution is to use the EC2 API tools (see: http://docs.amazonwebservices.com/AWSEC2/latest/APIReference/ for documentation) to terminate OR stop instances. There's a difference between the two of these, and it matters if your instance is EBS backed or S3 backed. You needn't run as root in order to terminate the instance - the fact that you have the private key and certificate shows Amazon that you're the BOSS, way above the hoi polloi who merely have root access on your instance.
Because these credentials can be used for mischief, be careful about running API tools from a given server, you'll need your certificate and private key on the server. That's a bad idea in the event that you have a security problem. It would be better to message to a master server and have it shut down the instance. If you have messaging set up in any way between instances, this can do all the work for you.
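As a minimal sketch, stopping an instance with the ec2-api-tools from a trusted machine might look like this, assuming the tools are installed and your certificate and private key are available via the usual EC2_CERT and EC2_PRIVATE_KEY environment variables (the instance id is a placeholder):
# stop (not terminate) the EBS-backed instance; ec2-terminate-instances would destroy it
ec2-stop-instances i-12345678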
Note 1: Eric Hammond reports that the halt will only suspend an EBS instance, so you still have storage fees. If you happen to start a lot of such instances, this can clutter things up. Your original question seems unclear about whether you mean to terminate or stop an instance. He has other good advice on this page.
Note 2: A short thread on the EC2 developers forum gives advice for Linux & Windows users.
Note 3: EBS instances are billed for partial hours, even when restarted. (See this thread from the developer forum.) Having an auto-suspend close to the hour mark can be useful, assuming the R process isn't working, in case one might re-task that instance (i.e. to save on not restarting). Other useful tools to consider: setTimeLimit and setSessionTimeLimit, and various checkpointing tools (I have a Q that mentions a couple). Using an auto-kill is useful if one has potentially badly behaved code.
Note 4: I recently learned of the shutdown command in package fun. This is multi-platform. See this blog post for commentary, and code is here. Dangerous stuff, but it could be useful if you want to adapt to Windows. I haven't tried it, though.
Update 1. Three more ideas:
You could use .Last() and runLast = TRUE for q() and quit(), which could shut down the instance (see the sketch after this list).
If using littler or a script that invokes the script via Rscript, the same command line functions could be used.
My favorite package of today, tcltk2 has a neat timer mechanism, called tclTaskSchedule() that can be used to schedule the execution of an expression. You could then go crazy with the execution of stuff just before a hourly interval has elapsed.
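A minimal sketch of the .Last() idea from the list above, assuming the account running R may run halt via sudo without a password (as in the sudoers example further down):
# runs when the R session ends; stop the machine
.Last <- function() {
  system("sudo halt")
}

# at the end of the script, quit and make sure .Last is executed
q(save = "no", runLast = TRUE)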
system("echo 'rootpassword' | sudo halt")
However, the downside is having your root password in plain text in the script.
AFAIK those ways you mentioned are the only ones. In any case the script will have to run as root to be able to shut down the machine (if you find a way to do it without root that's possibly an exploit). You ask for an easier way but system("halt") is just an additional line at the end of your script.
sudo is an option -- it allows you to run certain commands without prompting for any password. Just put something like this in /etc/sudoers
<username> ALL=(ALL) PASSWD: ALL, NOPASSWD: /sbin/halt
(of course replacing <username> with the name of the user running R) and system('sudo halt') should just work.
