robot framework: telnet execute command "Prompt is not set"

Trying to run this piece of code, but a "Prompt is not set" error keeps occurring at the Execute Command line.
*** Settings ***
Library    Telnet
Library    Telnet    ${out}
Library    Collections
Library    Collections    ${y}
Library    Collections    ${x}

*** Variables ***
${ip}      0.0.0.0
${port}    0

*** Test Cases ***
telnet to server
    Open Connection    ${ip}    ${port}

verify something
    ${out}=    Execute Command    ls
    ${y}=    Get From List    ${out}    0
    Should Match Regexp    ${y}    /^ID$/
    Exit Test
    Close All Connections
I have also tried deleting "Library    Telnet    ${out}" and replacing the "${out}=    Execute Command    ls" line with the following, but I receive the same error.
Write    ls
Set Prompt    ${out}
${out}=    Read Until Prompt
Is there a problem with the syntax? Or is my usage of the prompt completely wrong? (If so, how can I fix it?)
(Note: this is a first attempt at Robot Framework, so please feel free to comment on any other problems!)

Everything is in the Telnet docs. I use RED Robot Editor, which can show me the docs for the Telnet library and its keywords when I hover over the Telnet entry in the editor; the same docs can also be generated via the command line:
python -m robot.libdoc Telnet show
There is a part about Prompt:
== Prompt ==
Often the easiest way to read the output of a command is reading all
the output until the next prompt with `Read Until Prompt`. It also makes
it easier, and faster, to verify did `Login` succeed.
Prompt can be specified either as a normal string or a regular expression.
The latter is especially useful if the prompt changes as a result of
the executed commands. Prompt can be set to be a regular expression
by giving ``prompt_is_regexp`` argument a true value (see `Boolean
arguments`).
Examples:
| `Open Connection` | lolcathost | prompt=$ |
| `Set Prompt` | (> |# ) | prompt_is_regexp=true |
Check Telnet docs for more help and examples.
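For illustration, here is a minimal sketch built only from the keywords those docs describe; the prompt string "$" and the credential variables are my assumptions, not values taken from the question:
*** Test Cases ***
Read Command Output Via Prompt
    Open Connection    ${ip}    port=${port}
    # "$" is just an example; set it to whatever prompt your server really shows
    Set Prompt    $
    Login    ${username}    ${password}
    Write    ls
    ${out}=    Read Until Prompt
    Log    ${out}
    Close All Connections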
P.S. I don't see a reason to import Telnet with a parameter:
Library    Telnet    ${out}

Although I am too late to answer, I did not see the exact expected answer here, hence answering now.
You need to set two things in Open Connection:
prompt_is_regexp=yes
prompt=<your expected prompt>
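For example, as a sketch only (the prompt "#", the credentials, and the variables below are assumptions to be replaced with your own values):
*** Settings ***
Library    Telnet

*** Test Cases ***
Telnet To Server And Run Command
    Open Connection    ${ip}    port=${port}    prompt=#    prompt_is_regexp=yes
    Login    ${username}    ${password}
    # With the prompt set, Execute Command knows where the command output ends
    ${out}=    Execute Command    ls
    Log    ${out}
    Close All Connections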

Related

What is the purpose of the -char parameter in SQL Explorer?

I recently started using Progress OpenEdge and am confused about the purpose of the -char parameter to the sqlexp (SQL Explorer) command from Progress's command line utility, Proenv.
I have looked at the documentation here, but apparently Progress didn't feel that parameter should be documented. I've also looked in the Progress Knowledge Base but can't find an actual definition for the -char parameter.
For example, I see no difference between the commands sqlexp -char -db C:\pathtomydb\mydb.db -H 127.0.0.1 -S 2500 -user jmoor -password *** and sqlexp -db C:\pathtomydb\mydb.db -H 127.0.0.1 -S 2500 -user jmoor -password ***
Both commands seem to do exactly the same thing. Even if I run an actual SQL command such as SELECT * FROM PUB.CUSTOMER WHERE "Cust-id" = 15; using the -command parameter, the -char parameter seems to make no difference.
It has been obsolete since 10.1A; see https://knowledgebase.progress.com/articles/Article/P92359
The help says the following, but I can't see any way to configure it. I checked sqlexp.bat and it does not use a GUI class in any manner. It looks like an oversight; you can ignore it.
Usage: sqlexp [-modeoptions] [-connectoptions] [-generaloptions]
where mode options include:
-char Optional argument. Default is GUI mode.

How to open a command line terminal and execute some commands inside robot framework testcase?

I want to do the following: open a terminal on the same Ubuntu machine where my Robot test case is running and execute some commands in it.
I have written a Robot Framework test case as shown below:
*** Settings ***
Library    Telnet

*** Testcases ***
testcase1
    open connection    127.0.0.1
    write    gnome terminal
    write    ifconfig -a eth0
But it throws an "Errno 111 - connection refused" error.
Kindly guide me if anybody has an idea on this.
Thanks for your help in advance.
If you don't actually need to open a terminal window, robot has a Process library that lets you run external commands via the Run process keyword. For example:
*** Settings ***
| Library | Process
*** Test cases ***
| Example
| | Run process | ifconfig | -a | eth0
The answer here is twofold.
In most (if not all, including Ubuntu) Linux distributions, telnet is disabled by default. This is probably true in your case as well.
You could run a telnet server on the Ubuntu machine, or even configure it to run on startup (there are many threads on how to do that).
But as other people said before, running telnet against your local machine is probably not really what you want. You can use the Process library to run processes on your local host, and even the built-in OperatingSystem library has keywords for that (e.g. Run).
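As a sketch of that suggestion, using the standard Process library (the command is just the one from the question; the test name is made up):
*** Settings ***
Library    Process

*** Test Cases ***
Run Local Command
    ${result}=    Run Process    ifconfig    -a    eth0
    Log    ${result.stdout}
    Should Be Equal As Integers    ${result.rc}    0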
Create a .bat file and write your commands in it. If your .bat file is located in another folder, use a cd command first and then your required commands.
Example of such a .bat file:
cd C:\robotFramework\runner
java abc.class
Use the following syntax:
Run    xyz.bat    (for this, use Library    OperatingSystem)
Or
Run Process    xyz.bat    (for this, use Library    Process)
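Putting that together, a rough sketch (the path and file names are the illustrative ones from this answer; note that backslashes must be doubled in Robot Framework data):
*** Settings ***
Library    OperatingSystem
Library    Process

*** Test Cases ***
Run Batch File
    # OperatingSystem variant: Run returns the console output as a string
    ${output}=    Run    C:\\robotFramework\\runner\\xyz.bat
    Log    ${output}
    # Process variant: Run Process returns a result object with rc/stdout/stderr
    ${result}=    Run Process    xyz.bat    cwd=C:\\robotFramework\\runner    shell=True
    Log    ${result.stdout}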

nohup - don't want nohup.out but want the log going to a different file on the remote server

I'm running the following command (where the variables have valid values for the ssh command and $file is a .sql file).
nohup ssh -qn ${ssh_user}@${dbs} "sqlplus $dbuser/${dbpswd}@${dbname} <<ENDSQL | tee "${sql_run_output_file}".ssh.log
set echo off
set echo on
set timing on
set time on
set serveroutput on size 1000000
@${file}
ENDSQL
"
When I was using the above command without "nohup" before the ssh command, after an hour or so my connection from the source server (where I'm running ssh) was getting a "Connection reset...." error/message and hanging my Bash shell script (which contains this ssh command). When I use nohup, I don't see the connection issue.
Here's what I'm trying to get; I need your help.
Change the command shown above so that it will NOT create a nohup.out.
(Did I read correctly that I can use > instead of | tee ... and use 2>&1?)
I DO NOT want to run the command with a trailing "&" (background).
I DO want a LOG file for the sqlplus session that runs on the target DB server via the ssh command/connection (initiated from the source server).
Thanks.
You can still lose the connection when running ssh under nohup, so it's not really a good solution. If possible, I would recommend that you copy the SQL file via scp to the target server, then ssh into the server, open a screen session, and run the command from there (or run it under nohup there). Is that an option?
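A rough sketch of that suggestion, reusing the variable names from the question (the /tmp paths on the remote side are placeholders I made up):
# Copy the SQL script to the target server first
scp "${file}" "${ssh_user}@${dbs}:/tmp/job.sql"
# Run sqlplus on the target itself, under nohup, logging on the target side,
# so a dropped ssh connection no longer kills the job
ssh -qn "${ssh_user}@${dbs}" "nohup sqlplus $dbuser/${dbpswd}@${dbname} @/tmp/job.sql > /tmp/job.log 2>&1 &"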

How do I use the nohup command without getting nohup.out?

I have a problem with the nohup command.
When I run my job, I have a lot of data. The output nohup.out becomes too large and my process slows down. How can I run this command without getting nohup.out?
The nohup command only writes to nohup.out if the output would otherwise go to the terminal. If you have redirected the output of the command somewhere else - including /dev/null - that's where it goes instead.
nohup command >/dev/null 2>&1 # doesn't create nohup.out
Note that the >/dev/null 2>&1 sequence can be abbreviated to just >&/dev/null in most (but not all) shells.
If you're using nohup, that probably means you want to run the command in the background by putting another & on the end of the whole thing:
nohup command >/dev/null 2>&1 & # runs in background, still doesn't create nohup.out
On Linux, running a job with nohup automatically closes its input as well. On other systems, notably BSD and macOS, that is not the case, so when running in the background, you might want to close input manually. While closing input has no effect on the creation or not of nohup.out, it avoids another problem: if a background process tries to read anything from standard input, it will pause, waiting for you to bring it back to the foreground and type something. So the extra-safe version looks like this:
nohup command </dev/null >/dev/null 2>&1 & # completely detached from terminal
Note, however, that this does not prevent the command from accessing the terminal directly, nor does it remove it from your shell's process group. If you want to do the latter, and you are running bash, ksh, or zsh, you can do so by running disown with no argument as the next command. That will mean the background process is no longer associated with a shell "job" and will not have any signals forwarded to it from the shell. (A disowned process gets no signals forwarded to it automatically by its parent shell - but without nohup, it will still receive a HUP signal sent via other means, such as a manual kill command. A nohup'ed process ignores any and all HUP signals, no matter how they are sent.)
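For example (some_command is a placeholder):
nohup some_command </dev/null >/dev/null 2>&1 &   # fully detached, no nohup.out
disown                                            # also drop it from the shell's job table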
Explanation:
In Unixy systems, every source of input or target of output has a number associated with it called a "file descriptor", or "fd" for short. Every running program ("process") has its own set of these, and when a new process starts up it has three of them already open: "standard input", which is fd 0, is open for the process to read from, while "standard output" (fd 1) and "standard error" (fd 2) are open for it to write to. If you just run a command in a terminal window, then by default, anything you type goes to its standard input, while both its standard output and standard error get sent to that window.
But you can ask the shell to change where any or all of those file descriptors point before launching the command; that's what the redirection (<, <<, >, >>) and pipe (|) operators do.
The pipe is the simplest of these... command1 | command2 arranges for the standard output of command1 to feed directly into the standard input of command2. This is a very handy arrangement that has led to a particular design pattern in UNIX tools (and explains the existence of standard error, which allows a program to send messages to the user even though its output is going into the next program in the pipeline). But you can only pipe standard output to standard input; you can't send any other file descriptors to a pipe without some juggling.
The redirection operators are friendlier in that they let you specify which file descriptor to redirect. So 0<infile reads standard input from the file named infile, while 2>>logfile appends standard error to the end of the file named logfile. If you don't specify a number, then input redirection defaults to fd 0 (< is the same as 0<), while output redirection defaults to fd 1 (> is the same as 1>).
Also, you can combine file descriptors together: 2>&1 means "send standard error wherever standard output is going". That means that you get a single stream of output that includes both standard out and standard error intermixed with no way to separate them anymore, but it also means that you can include standard error in a pipe.
So the sequence >/dev/null 2>&1 means "send standard output to /dev/null" (which is a special device that just throws away whatever you write to it) "and then send standard error to wherever standard output is going" (which we just made sure was /dev/null). Basically, "throw away whatever this command writes to either file descriptor".
When nohup detects that neither its standard error nor output is attached to a terminal, it doesn't bother to create nohup.out, but assumes that the output is already redirected where the user wants it to go.
The /dev/null device works for input, too; if you run a command with </dev/null, then any attempt by that command to read from standard input will instantly encounter end-of-file. Note that the merge syntax won't have the same effect here; it only works to point a file descriptor to another one that's open in the same direction (input or output). The shell will let you do >/dev/null <&1, but that winds up creating a process with an input file descriptor open on an output stream, so instead of just hitting end-of-file, any read attempt will trigger a fatal "invalid file descriptor" error.
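A quick way to see the input side in action (using cat purely as an example reader):
cat </dev/null     # prints nothing: the very first read hits end-of-file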
nohup some_command > /dev/null 2>&1&
That's all you need to do!
Have you tried redirecting all three I/O streams:
nohup ./yourprogram > foo.out 2> foo.err < /dev/null &
You might want to use the detach program. You use it like nohup but it doesn't produce an output log unless you tell it to. Here is the man page:
NAME
detach - run a command after detaching from the terminal
SYNOPSIS
detach [options] [--] command [args]
Forks a new process, detaches it from the terminal, and executes command
with the specified arguments.
OPTIONS
detach recognizes a couple of options, which are discussed below. The
special option -- is used to signal that the rest of the arguments are
the command and args to be passed to it.
-e file
Connect file to the standard error of the command.
-f Run in the foreground (do not fork).
-i file
Connect file to the standard input of the command.
-o file
Connect file to the standard output of the command.
-p file
Write the pid of the detached process to file.
EXAMPLE
detach xterm
Start an xterm that will not be closed when the current shell exits.
AUTHOR
detach was written by Robbert Haarman. See http://inglorion.net/ for
contact information.
Note: I have no affiliation with the author of the program; I'm only a satisfied user.
The following command will let you run something in the background without getting nohup.out:
nohup command | tee &
This way, you will be able to get console output while running the script on the remote server:
sudo bash -c "nohup /opt/viptel/viptel_bin/log.sh $* &> /dev/null" &
Redirecting the output of sudo causes sudo to re-ask for the password; thus an awkward mechanism is needed for this variant.
If you have a Bash shell on your Mac/Linux machine in front of you, try out the steps below to understand redirection practically.
Create a two-line script called zz.sh:
#!/bin/bash
echo "Hello. This is a proper command"
junk_errorcommand
The echo command's output goes to the STDOUT file stream (file descriptor 1).
The error command's output goes to the STDERR file stream (file descriptor 2).
Currently, simply executing the script sends both STDOUT and STDERR to the screen.
./zz.sh
Now start with standard redirection:
./zz.sh > zfile.txt
In the above, the "echo" output (STDOUT) goes into zfile.txt, whereas the "error" output (STDERR) is displayed on the screen.
The above is the same as:
./zz.sh 1> zfile.txt
Now you can try the opposite and redirect the "error" STDERR into the file. The STDOUT from the "echo" command goes to the screen.
./zz.sh 2> zfile.txt
Combining the above two, you get:
./zz.sh 1> zfile.txt 2>&1
Explanation:
FIRST, send STDOUT (1) to zfile.txt.
THEN, send STDERR (2) to STDOUT (1) itself (by using the &1 pointer).
Therefore, both 1 and 2 go into the same file (zfile.txt).
Finally, you can wrap the whole thing in nohup ... & to run it in the background:
nohup ./zz.sh 1> zfile.txt 2>&1 &
You can run the command below:
nohup <your command> > <outputfile> 2>&1 &
e.g.
I have a nohup command inside a script:
./Runjob.sh > sparkConcuurent.out 2>&1

Unix: Grep on console output

This is my first question on stackoverflow!
I want to have a unix script that will run grep on the console output. Here is what my script does:
1. Telnet into a remote server (I have done this part successfully)
2. On successful login, the remote server outputs information to the console. I need to run grep on that console output (need help with this).
So, I need a script to run grep on the output appearing on the console.
Any thoughts??
Thanks,
Puneet
Use SSH instead. It's more secure and far easier to script.
ssh remoteusername@remotehost /path/to/remote/script | grep 'something'
With appropriate key setup, it won't even prompt you for a password.
Have you tried I/O redirection? You could either do
$ your-command > output.txt
and then run grep on that file, or just directly pipe the output through grep like so
$ your-command | grep ...
Google around; there are thousands of good articles about this around the web.
Instead of telnet, I would suggest using netcat (nc). You could then pass your login credentials via standard input and grep the standard output (nc prints anything sent by the server on standard output).
nc <host> <port> <auth.txt | grep 'string'
What you probably want to use is a pipe. You can see it in the answers above; it's the | sign in the commands. It may be difficult to locate on your keyboard, depending on the layout (I have to admit it is not used very often).
A pipe redirects the output of one command: instead of sending it to the console, it sends it as the input of another command.
cmd1 | grep foo is equivalent to running grep foo on the output of cmd1 (you can replace cmd1 with your netstat command).
One last thing: you can have as many pipes as you want. For instance, on my machine I can run ls -ltr | tail -1 | awk '{print $9}' | grep foo to look for the word foo in the name of the last modified file.
