Passing multiple values as parameter to teradata view [duplicate] - teradata

This question already has an answer here:
Teradata variable with list of values
(1 answer)
Closed 3 years ago.
We are trying to pass values from a report to a Teradata view as a parameter. How do we pass multiple values to a Teradata view?
AND (v_fact_xyz in (?) or 'ALL' in (?))
is the line of code as currently written, where ? can be a single value ('Abd, EFG(ORM)') or multiple values like these.
The report works fine with a single parameter passed, but throws an error when multiple values are passed:
.NET Data Provider for Teradata, 110083 error:
A Null has been specified as the value for a parameter

If I understand correctly, your question is how to pass multiple values to your IN clause, something like this:
SELECT *
FROM MyView
WHERE v_fact_xyz IN ('Abd','EFG(ORM)','AnotherValue')
If that's the case, one way to do it is to use a "split" UDF (user-defined function) to convert your parameter string into a form that the IN clause supports. The IN clause can take a record set or a single value, but not a comma-separated list packed into one string.
This page may give you some ideas:
https://web.archive.org/web/20211020153409/https://www.4guysfromrolla.com/webtech/031004-1.shtml
Also check whether Teradata offers any built-in "split" or "delimited" functions (for example, newer releases include STRTOK_SPLIT_TO_TABLE) which you can use to do this.
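As a client-side alternative to a split UDF, the parameter list can be expanded into one placeholder per value before binding. A minimal sketch in Python, using an in-memory SQLite database as a stand-in for Teradata (the `expand_in_clause` helper is illustrative, not part of any driver):

```python
import sqlite3

def expand_in_clause(sql_template, values):
    """Replace a single 'IN (?)' with one '?' placeholder per value."""
    placeholders = ", ".join("?" for _ in values)
    return sql_template.replace("IN (?)", f"IN ({placeholders})")

# Demo table standing in for the view's underlying data.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE fact (v_fact_xyz TEXT)")
conn.executemany("INSERT INTO fact VALUES (?)",
                 [("Abd",), ("EFG(ORM)",), ("Other",)])

params = ["Abd", "EFG(ORM)"]
sql = expand_in_clause(
    "SELECT v_fact_xyz FROM fact WHERE v_fact_xyz IN (?)", params)
rows = [r[0] for r in conn.execute(sql, params)]
print(sorted(rows))  # ['Abd', 'EFG(ORM)']
```

The values themselves are still bound as parameters, so this avoids splicing user input into the SQL text.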

Related

SQLite C API equivalent to typeof(col)

I want to detect column data types of any SELECT query in SQLite.
In the C API, there is const char *sqlite3_column_decltype(sqlite3_stmt*,int) for this purpose. But that only works for columns in a real table. Expressions, such as LOWER('ABC'), or columns from queries like PRAGMA foreign_key_list("mytable"), always return null here.
I know there is also typeof(col), but I don't have control over the fired SQL, so I need a way to extract the data type out of the prepared statement.
You're looking for sqlite3_column_type():
The sqlite3_column_type() routine returns the datatype code for the initial data type of the result column. The returned value is one of SQLITE_INTEGER, SQLITE_FLOAT, SQLITE_TEXT, SQLITE_BLOB, or SQLITE_NULL. The return value of sqlite3_column_type() can be used to decide which of the first six interfaces should be used to extract the column value.
And remember that in SQLite, type is for the most part associated with the value, not the column - different rows can have different types stored in the same column.
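That per-value typing is easy to see. A quick demonstration using Python's sqlite3 module and the SQL typeof() function (rather than the C API, but sqlite3_column_type() reports the same storage classes):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Column declared with no type, so values keep their own storage class.
conn.execute("CREATE TABLE t (col)")
conn.executemany("INSERT INTO t VALUES (?)",
                 [(1,), (1.5,), ("abc",), (None,)])

# typeof() reports the storage class of each stored VALUE, not the column.
types = [r[0] for r in conn.execute("SELECT typeof(col) FROM t")]
print(types)  # ['integer', 'real', 'text', 'null']
```

This is why sqlite3_column_type() must be called per row: the answer can legitimately differ from one row to the next in the same column.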

Is there a way to display dynamic columns in Oracle apex

Long story short, I can't use PIVOT for this task due to the long elements that I need to include in the columns. Instead I tried to create a Classic Report based on a function in Oracle APEX. The query is generated correctly, but it's not working in the Classic Report.
A general hint first: output your variable l_sql to your console using dbms_output.put_line, or insert it into some kind of debugging table. Also be careful about the data type of that variable: if the SQL grows, you can reach a point where you need a CLOB variable instead of a varchar2.
You will need to supply table structures and test data if you would like your problem analyzed completely, so for now I will give you some general explanations:
Use Generic Column Names is fine if you have a permanent, unchangeable set of columns. But if the order of your columns or even their number can change, then this is a bad idea, as your page will show an error if your query returns more columns than Generic Column Count allows.
Option 1: Use column aliases in your query
Enhance your PL/SQL Function Body returning SQL Query in a way that it outputs verbose display names, like this:
return 'select 1 as "Your verbose column name", 2 as "Column #2", 3 as "Column #3" from dual';
It has the disadvantage that the column names also appear this way in the designer, and APEX will only update these column names when you re-validate the function. You will have a hard time referencing a column with the internal name Your verbose column name in process code or a dynamic action.
However, it still works even if you change the column names without telling APEX, for example by externalizing the PL/SQL Function Body into a real function.
Option 2: Use custom column headings
A little bit hidden, but there is also the option of completely custom column headings. It is almost at the end of the attributes page of your report region.
Here you can also supply a function that returns your column names. Be careful: this function is not supposed to return an SQL query that itself returns column names; instead it must return the column names themselves, separated by a colon.
With this method, it is easier to identify and reference your columns in the designer.
Option 3: Both of it
Turn off Generic Column Names, let your query return column names that can be easily identified and referenced, and use the custom column headings function to return verbose names for your users.
My personal opinion
I'm using the 3rd option in a production application where people can change the number and order of columns themselves using shuttle items on the report page. It took some time, but now it works like a charm - like a dynamic PIVOT without PIVOT.

Using custom number of parameters while using Call db procedure step in Pentaho (PDI)

Description:
Recently I've been trying to automate some tasks at work using Pentaho Data Integration (PDI), and I've come upon a problem that I've had no luck solving (I researched for many hours and tried to solve it on my own as well). My aim is to load a text file containing the name of a PL/SQL procedure stored on the server, plus a custom number of parameters for the procedure. For example, if the source text file contained the following text:
Test_schema.job_pkg.run_job;12345
it should run the job_pkg.run_job procedure from the defined connection and use 12345 as the single parameter.
The problem:
The Call DB Procedure transformation step only accepts a fixed number of parameters. For example, I set the step to accept 4 parameters, but the procedure I'm calling accepts only 1 parameter. I want to be able to IGNORE the other parameters set in the step. When I send just one parameter while the step is set to accept 4, it throws:
Call DB Procedure.0 - ORA-06550: row 1, column 7:
PLS-00306: wrong number or types of arguments in call to 'RUN_JOB'
ORA-06550: row 1, column 7: PL/SQL: Statement ignored
What I have so far:
I've made a job that starts a transformation which loads the contents of the source file into memory, splits it into the correct fields using a Modified Java Script Value step, and sets Pentaho variables with the extracted values; then a second transformation is loaded that reads these variables and passes them as fields to the Call DB Procedure step. The last step always fails unless I manually remove all unused arguments.
Solution:
Based on AlainD's answer I tried to use the Switch / Case step, which solved the problem. Now there is a different problem regarding conversion of values: if I pass a number but set it as STRING in the Call DB Procedure step's parameters, it throws
ORA-01403 no data found
This can be solved by handling the data via a Modified Java Script Value step (or any other suitable step) to convert the data into the "correct" format.
What I do in cases like that is to build a SQL command in a String, something like Test_schema.job_pkg.run_job(12345) and execute it with an Execute SQL script.
Another workaround would be to count the number of parameters in the Modified Java Script Value step, and use a Switch/Case step to redirect the flow to a sequence of Call DB Procedure steps: one with 0 parameters, one with 1 parameter, one with 2 parameters, and so on. This method assumes that the maximum number of parameters is small.
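The string-building approach from the first suggestion can be sketched as follows. This is a Python model of what the Modified Java Script Value step would do (the `build_call` helper is hypothetical, and the input line format is the one from the question):

```python
def build_call(line):
    """Turn 'Test_schema.job_pkg.run_job;12345' into a SQL call string."""
    parts = line.strip().split(";")
    proc = parts[0]
    args = [a for a in parts[1:] if a]  # ignore empty trailing fields
    return f"{proc}({', '.join(args)})"

call = build_call("Test_schema.job_pkg.run_job;12345")
print(call)  # Test_schema.job_pkg.run_job(12345)
print(build_call("Test_schema.job_pkg.run_job;1;2;3"))
```

Because the argument count is folded into the generated string, the Execute SQL script step no longer cares how many parameters the procedure takes.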

Can a prepared statement use multiple values [duplicate]

This question already has answers here:
PreparedStatement IN clause alternatives?
(33 answers)
Closed 5 years ago.
Consider an SQL statement like
Select * from items where id in (123,456,789)
Can I use a prepared statement like
Select * from items where id in ?
and then supply the parameter as a set or list? Maybe I'd need parentheses around the "?".
I'm planning to use this in R, but I guess it's a general query for JDBC.
Two partial work-arounds:
Create a function that changes in (?) to in (?,?,?), depending on the length of the supplied list of values, then break that array into individual values for binding.
Pros: one query; binding is straightforward
Cons: not feasible with large lists; you have to wrap your queries in query-manglers; not fool-proof
Upload values to a temp table and change your query to
select * from items where id in (select val from temptable)
Pros: deal with arbitrary number of values; no need to trick SQL; binding is just as one would do for a multi-row insert
Cons: multiple calls; requires temp table and clean-up; might be problematic integrating with more complex queries (??)
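The temp-table work-around can be sketched like this (Python with sqlite3 standing in for the actual JDBC/R connection; table and column names are illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE items (id INTEGER, name TEXT)")
conn.executemany("INSERT INTO items VALUES (?, ?)",
                 [(123, "a"), (456, "b"), (789, "c"), (999, "d")])

# Work-around 2: load the values into a temp table, then subselect.
ids = [123, 456, 789]
conn.execute("CREATE TEMP TABLE temptable (val INTEGER)")
conn.executemany("INSERT INTO temptable VALUES (?)", [(i,) for i in ids])

rows = conn.execute(
    "SELECT id, name FROM items"
    " WHERE id IN (SELECT val FROM temptable) ORDER BY id"
).fetchall()
print(rows)  # [(123, 'a'), (456, 'b'), (789, 'c')]
```

Binding the list is just a multi-row insert, so the main query text never changes regardless of how many values are supplied.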

SINGLEVALUEQUERY and MULTIVALUEQUERY with Pentaho Report Designer

I have multiple data sets that drive the Pentaho report. The data is derived from a handful of stored procedures. I need to access multiple data sources within the report without using sub reports and I believe the best solution is to create open formulas. The SINGLEVALUEQUERY I believe will only return the first column or row. I need to return multiple columns.
As an example, my stored procedure, which is named HEADER in Pentaho (CALL Stored_procedure_test (2014, HEADER)), returns 3 values - HEADER_1, HEADER_2, HEADER_3. I'm uncertain of the correct syntax to return all three values in the open formula. Below is what I tried, but it was unsuccessful.
=MULTIVALUEQUERY("HEADER";?;?)
The second parameter denotes the column that contains the result.
If you don't give a column name here, the reporting engine will simply take the first column of the result. In the case of the MULTIVALUEQUERY function, the values of the result set are then aggregated into an array of values suitable to be passed into a multi-select parameter or used in an IN clause in a SQL data-factory.
For more details see https://www.on-reporting.com/blog/using-queries-in-formulas-in-pentaho/