I'm trying to run a procedure on an AppServer which is set up on localhost, but I get this error:
** 'testProc' was not found. (293)
DEFINE VARIABLE hndle AS HANDLE NO-UNDO.
DEFINE VARIABLE tmp AS CHARACTER NO-UNDO.
CREATE SERVER hndle.
PROCEDURE testProc:
DEFINE OUTPUT PARAMETER o_tmp AS CHARACTER INITIAL "HELLO".
END PROCEDURE.
hndle:CONNECT ("-AppService AppServiceName -H localhost").
RUN testProc ON hndle(OUTPUT tmp).
hndle:DISCONNECT ().
DELETE OBJECT hndle.
You can't run internal procedures on an AppServer. You have to put the code into its own .p file and run that on the AppServer. The .p also has to be available in the PROPATH of the AppServer.
You're trying to run the internal procedure 'testProc', not the procedure file 'testProc.p'. The calling code and the procedure have to be two separate files. Create a 'testProc.p' file on your AppServer and put your logic in it:
/* testProc.p */
DEFINE OUTPUT PARAMETER o_tmp AS CHARACTER INITIAL "HELLO" NO-UNDO.
In a separate file, put your code that calls testProc.p:
DEFINE VARIABLE hndle AS HANDLE NO-UNDO.
DEFINE VARIABLE tmp AS CHARACTER NO-UNDO.
CREATE SERVER hndle.
hndle:CONNECT ("-AppService AppServiceName -H localhost").
RUN testProc.p ON hndle (OUTPUT tmp).
hndle:DISCONNECT ().
DELETE OBJECT hndle.
MESSAGE tmp VIEW-AS ALERT-BOX INFORMATION.
Note that your calling program is running testProc.p, not testProc. Run this code and you should get a pop-up message saying "HELLO".
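If the connection itself ever fails you'll get a different error than 293, so it's also worth checking the result of CONNECT before the RUN. Here's a minimal sketch reusing the names from the question (the server handle's CONNECT method returns a LOGICAL, and NO-ERROR suppresses the error condition so you can handle it yourself):
DEFINE VARIABLE hndle AS HANDLE NO-UNDO.
DEFINE VARIABLE tmp AS CHARACTER NO-UNDO.
DEFINE VARIABLE lOK AS LOGICAL NO-UNDO.
CREATE SERVER hndle.
/* CONNECT returns TRUE when the connection is established */
lOK = hndle:CONNECT ("-AppService AppServiceName -H localhost") NO-ERROR.
IF lOK THEN DO:
    RUN testProc.p ON hndle (OUTPUT tmp).
    hndle:DISCONNECT ().
    MESSAGE tmp VIEW-AS ALERT-BOX INFORMATION.
END.
ELSE
    MESSAGE "Could not connect:" ERROR-STATUS:GET-MESSAGE(1)
        VIEW-AS ALERT-BOX ERROR.
DELETE OBJECT hndle.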
Is there a way to connect to multiple Progress databases?
Currently what we do is use multiple .p files to fetch the data.
Example:
The first program fetches the data from the customer database and uses a RUN command to call a program that connects to the second database.
The second program takes an input parameter to map the value.
My question is: is there a way we can do this in one single program?
Below is the sample program:
/* First program */
FIND FIRST customer WHERE customer.cust-id EQ "v456" NO-LOCK NO-ERROR.
IF AVAILABLE customer THEN
RUN /HOME/dbconnect.p (INPUT customer.cust-id, "ACCOUNTS").
RUN /HOME/program-2.p (INPUT customer.cust-id).
/* Second program */
DEFINE INPUT PARAMETER ipcCust-id AS CHARACTER.
FOR EACH billing WHERE billing.cust-id EQ ipcCust-id NO-LOCK:
DISPLAY billing.DATE.
END.
You can use the CONNECT statement to connect to databases at runtime. You cannot CONNECT and access the newly connected db in the same procedure - the code that uses the new connection must run in a sub-procedure.
You might, for instance, do something like this:
define variable dbList as character no-undo.
define variable i as integer no-undo.
define variable n as integer no-undo.
dbList = "db1,db2,db3".
n = num-entries( dbList ).
do i = 1 to n:
connect value( entry( i, dbList )) no-error.
run "./p1.p".
current-language = current-language. /* forces the r-code cache to be cleared */
disconnect value( entry( i, dbList )) no-error.
end.
and p1.p:
/* p1.p
*/
define variable i as integer no-undo.
for each _field no-lock: /* count the number of fields defined in the schema */
i = i + 1.
end.
display pdbname(1) i.
pause.
p1.p is just a silly little program to demonstrate that the data access is actually coming from 3 distinct databases.
The "current-language = current-language" thing is important if the same procedure will run against several different databases. Without that little nugget the procedure may be cached and it will remember the previous db that it was connected to.
Or if you prefer Stefan's dynamic query approach:
define variable dbList as character no-undo.
define variable i as integer no-undo.
define variable n as integer no-undo.
define variable b as handle no-undo.
define variable q as handle no-undo.
create query q.
dbList = "db1,db2,db3".
n = num-entries( dbList ).
do i = 1 to n:
connect value( entry( i, dbList )) no-error.
create buffer b for table "_field".
q:set-buffers( b ).
q:query-prepare( "preselect each _field no-lock" ).
q:query-open().
display pdbname( 1 ) q:num-results.
pause.
q:query-close().
delete object b.
disconnect value( entry( i, dbList )) no-error.
end.
delete object q.
Is connecting dynamically (inside the .p) a requirement?
If not, it's probably easier to just connect the databases when launching the program...
(I'm on Windows, but this should be recognizable:)
prowin -pf <path-to>\connect.pf -p <path-to>\program.p
where connect.pf contains:
-db <connection settings for db 1>
-db <connection settings for db 2>
<...>
With static queries, no. You always need to have the database connected before running the .p. With dynamic queries, since there is no reference to the database in the r-code, you can do whatever you want from a single .p.
I am trying to test against multiple environments with a single test case by passing in a variable from the command line. I use the following command:
robot --variable TESTENV:prod advertisingdisclosure_Page.robot
I need to test the value of TESTENV and, depending on the value passed in, set a variable (specifically a URL) to the appropriate value. With the following code in the first keyword section of the test case I get an error:
IF    ${TESTENV} == "uat"
    ${MAIN_URL}=    Set Variable    ${env_data['uat_url']}
ELSE IF    ${TESTENV} == "dev"
    ${MAIN_URL}=    Set Variable    ${env_data['dev_url']}
ELSE IF    ${TESTENV} == "prod"
    ${MAIN_URL}=    Set Variable    ${env_data["prod_url"]}
ELSE
    Fail    "No URL specified"
END
I get the following error:
Evaluating expression 'prod' failed: NameError: name 'prod' is not defined nor importable as module
The examples I have found show how to use the Global Variable directly, but not how to evaluate it for a specific value.
You need to put quotes around the variable name - otherwise the framework just substitutes its value and passes the resulting expression to Python.
Thus it becomes prod == "uat", and Python rightfully complains there is no defined variable prod.
The fix is simple - just surround the usage in quotes, so after substitution this becomes a string comparison:
"${TESTENV}" == "uat"
Alternatively you can use another syntax - no curly brackets - which tells RF to pass the variable itself, not its value, so it is a defined name for Python:
$TESTENV == "uat"
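For completeness, here is what the block from the question would look like with the quoted form applied throughout (a sketch assuming, as in the original, that ${env_data} is already available in scope):
IF    "${TESTENV}" == "uat"
    ${MAIN_URL}=    Set Variable    ${env_data['uat_url']}
ELSE IF    "${TESTENV}" == "dev"
    ${MAIN_URL}=    Set Variable    ${env_data['dev_url']}
ELSE IF    "${TESTENV}" == "prod"
    ${MAIN_URL}=    Set Variable    ${env_data['prod_url']}
ELSE
    Fail    No URL specified
END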
I'm trying to create a stored function in a MariaDB database.
I copied the function I'm trying to create from the MariaDB Docs:
DELIMITER //
CREATE FUNCTION FortyTwo() RETURNS TINYINT DETERMINISTIC
BEGIN
DECLARE x TINYINT;
SET x = 42;
RETURN x;
END
//
DELIMITER ;
Unfortunately, I get the following error:
SQL Error [1064] [42000]: You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near '' at line 3
What baffles me most is that the given code is supposed to resolve the very error I'm getting, according to the MariaDB docs:
"The solution is to specify a distinct delimiter for the duration of the process, using the DELIMITER command."
It turned out the client I used to issue the command, DBeaver, was causing the trouble.
After switching over to MySQL Workbench everything worked as expected.
Apparently, DBeaver didn't recognise the delimiters correctly.
I think you may have forgotten to select the database you want this routine to be stored in.
So try adding a USE statement as the first line:
use `test`;
DELIMITER //
CREATE FUNCTION FortyTwo() RETURNS TINYINT DETERMINISTIC
BEGIN
DECLARE x TINYINT;
SET x = 42;
RETURN x;
END
//
DELIMITER ;
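Once the function has been created, a quick sanity check is to call it directly; this should return 42:
SELECT FortyTwo();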
Is there a way to access all the variables/arguments passed through the command line or a variable file (-V option) during Robot Framework execution? I know that in Python the running code can access them with sys.argv.
The answer for getting the CLI arguments is inside your question - just look at the content of sys.argv and you'll see everything that was passed to the executor:
${args}=    Evaluate    sys.argv    sys
Log To Console    ${args}
That'll return a list, where the executable itself (run.py) is the first member, and all arguments and their values are present in the order given during the execution:
['C:/my_directories/rf-venv/Lib/site-packages/robot/run.py', '--outputdir', 'logs', '--variable', 'USE_BROWSERSTACK:true', '--variable', 'IS_DEV_ENVIRONMENT:false', '--include', 'worky', 'suites\\test_file.robot']
You explicitly mention variable files; that one is a little bit trickier - the framework parses the files itself and creates the variables according to its rules. You naturally can see them in the CLI args up there, and the other possibility is to use the built-in keyword Get Variables, which "Returns a dictionary containing all variables in the current scope." (quote from its documentation). Keep in mind though that these are all variables - not only the ones passed on the command line, but also the ones defined in the suite, imported keywords, etc.
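For instance, to dump everything in the current scope (a minimal sketch; Get Variables and Log To Console are both BuiltIn keywords, so no extra imports are needed):
${vars}=    Get Variables
Log To Console    ${vars}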
You have Log Variables to see their names and values "at current scope".
There is no possibility to see the arguments passed to robot.
I want to create my own pipeline like in a Unix terminal (just to practice). It should take the applications to execute in quotes, like this:
pipeline "ls -l" "grep" ....
I know that I should use fork(), execl() (exec*) and the API for redirecting stdin and stdout. But is there any alternative to execl that executes an app with its arguments given as a single string containing both the application path and the arguments? In other words, is there a way to avoid parsing ls -l manually and instead pass it as one argument to execl?
If you have only a single command line instead of an argument vector, let the shell do the parsing for you:
execl("/bin/sh", "sh", "-c", the_command_line, NULL);
Of course, don't let untrusted remote user input into this command line. But if you are dealing with untrusted remote user input to begin with, you should try to arrange to pass an actual list of isolated arguments to the target application, as per normal usage of exec[vl], not a single command line.
Realistically, you can only really use execl() when the number of arguments to the command are known at compile time. In a shell, you'll normally use execv() or execvp() instead; these can handle an arbitrary number of arguments to the command to be executed. In theory, you use execv() when the path name of the command is given and execvp() (which does a PATH-based search for the command) when it isn't. However, execvp() handles the 'path given' case, so simply use execvp().
So, for your pipeline command, you'll end up with one child using something equivalent to:
char *args_1[] = { "ls", "-l", 0 };
execvp(args_1[0], args_1);
The other child will end up using something equivalent to:
char *args_2[] = { "grep", "pattern", 0 };
execvp(args_2[0], args_2);
Except, of course, that you'll have created those strings from the command-line arguments instead of by initialization as shown. Note that grep requires a pattern to search for.
You've still got plumbing issues to resolve: make sure you close enough pipe file descriptors. When you dup() or dup2() a pipe onto standard input or standard output, close both of the file descriptors returned by pipe().
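To make the plumbing concrete, here's a minimal sketch of a fixed two-stage pipeline (equivalent to ls -l | grep pattern) using pipe(), fork(), dup2() and execvp(); error handling on fork() is omitted for brevity:
#include <stdio.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    int fd[2];                      /* fd[0] = read end, fd[1] = write end */

    if (pipe(fd) == -1) { perror("pipe"); return 1; }

    pid_t pid1 = fork();
    if (pid1 == 0)                  /* first child: ls -l */
    {
        char *args_1[] = { "ls", "-l", 0 };
        dup2(fd[1], STDOUT_FILENO); /* stdout -> write end of the pipe */
        close(fd[0]);               /* close both pipe fds after dup2 */
        close(fd[1]);
        execvp(args_1[0], args_1);
        perror("execvp ls");        /* only reached if exec fails */
        _exit(127);
    }

    pid_t pid2 = fork();
    if (pid2 == 0)                  /* second child: grep pattern */
    {
        char *args_2[] = { "grep", "pattern", 0 };
        dup2(fd[0], STDIN_FILENO);  /* stdin <- read end of the pipe */
        close(fd[0]);
        close(fd[1]);
        execvp(args_2[0], args_2);
        perror("execvp grep");
        _exit(127);
    }

    /* parent: close both ends, or grep never sees EOF */
    close(fd[0]);
    close(fd[1]);
    waitpid(pid1, NULL, 0);
    waitpid(pid2, NULL, 0);
    return 0;
}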