Db2 create index fails with SQLSTATE=42703 (Unix)

I have a table that was created successfully.
One of the columns is named code and another one is "deleted".
I plan to use these two fields to create an index, so I am doing something like the following:
CREATE INDEX SADM.IDX_SC_IDX1 on SADM.SC ("code" ASC, "DELETED") ALLOW REVERSE SCANS;
This works fine in my local environment. However, I hit this error in UAT:
DB21034E The command was processed as an SQL statement because it was not a
valid Command Line Processor command. During SQL processing it returned:
SQL0205N Column, attribute, or period "code" is not defined in
"SADM.SC". SQLSTATE=42703
I double-checked the table and confirmed that the "code" and "deleted" columns exist and match my local environment.
I believe something is wrong somewhere, but I can't find the root cause.
Kindly advise.

As per my comment: you are using double quotes around the column names, so the column case (uppercase, lowercase) must match exactly between the table definition and the index definition.

Make sure to name the columns as they were created and are listed in the system catalog. Look for the column names in SYSCAT.COLUMNS (for most Db2 versions). If you don't use quotes, Db2 converts identifiers to uppercase by default. However, if you use quotes they always need to be referenced exactly as written.
"code" is different from "Code" or "COde" or CODE. Thus, check how the column is really named.

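For example, you can check the catalog first and then match the quoting to what it shows. A sketch reusing the names from the question:

SELECT colname FROM syscat.columns
WHERE tabschema = 'SADM' AND tabname = 'SC';

-- If UAT reports the column as CODE (i.e. created without quotes), reference it unquoted:
CREATE INDEX SADM.IDX_SC_IDX1 ON SADM.SC (CODE ASC, DELETED) ALLOW REVERSE SCANS;
-- If the catalog really shows lowercase "code", the quoted form from the question is correct.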
How to Pass Parameter Values at Runtime in Informatica Mapping Parameter

I have a scenario where we need to load data from a source file into a target table starting from a particular date (like LOAD_DATE), so I'll create a mapping parameter for LOAD_DATE and pass it into the Source Qualifier query. My query looks like this:
SELECT * FROM my_TABLE WHERE DATE >= '$$LOAD_DATE'
Here I need to pass the value for '$$LOAD_DATE' from another, external database. I know that I can pass values from the parameter file.
But my requirement is not to hardcode the values in the parameter file but to feed them in at runtime from another database. I would appreciate your help and thoughts on this.
You don't have to hardcode.
You can do it like this:
Option 1: Create a mapping that generates the param file in the required format.
Read from the other DB.
In an Expression transformation, create the port below, which generates the actual param string. Please note that we need to add a newline so the output is recognized as an actual param file:
out_str = '[<<name of folder . name of workflow or session>>]' || CHR(10) ||
          '$$LOAD_DATE=' || CHR(39) || <<date value from another DB>> || CHR(39)
Then link the above port to a flat file target. Name the output file session_param.txt or whatever is suitable, and make sure the parameter file is generated correctly; a sample of the expected output is shown below.
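The generated file should look roughly like this (folder, workflow, and session names here are made-up placeholders):

[MyFolder.WF:wf_my_workflow.ST:s_my_session]
$$LOAD_DATE='2020-01-01'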
Use above file as a parameter file in your actual workflow.
Option 2: Join the other table into the original table flow. This can be difficult and needs changes to the existing mapping.
Join the other table from the other DB to the main table on a dummy condition. Make sure you get distinct values of LOAD_DATE from that table, and make sure you always get exactly one value from that DB.
Once you have the LOAD_DATE field from the other table, you can use it in a Filter transformation to filter the data.
After this point you can add your original mapping logic.
The whole mapping should look like this:
SQ_MAIN_TABLE ----------------------->|
sq_ANOTHER_TABLE --DISTINCT_LOAD_DT-->JNR--FIL on LOAD_DT --><<your mapping logic>>

Is there a way to display dynamic columns in Oracle APEX?

Long story short, I can't use PIVOT for this task because of the long elements that I need to include in the columns. Instead I tried to create a Classic Report based on a function in Oracle APEX. The query is generated correctly, but it's not working in the Classic Report.
A general hint first: output your variable l_sql to the console using dbms_output.put_line, or insert it into some kind of debugging table. Also be careful about the data type of that variable: if the SQL can grow, you may reach a point where you need a CLOB variable instead of varchar2.
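A minimal sketch of both options (debug_sql is a hypothetical logging table; create whatever structure suits you):

-- inside your PL/SQL Function Body returning SQL Query:
dbms_output.put_line(l_sql);
-- or persist it into the hypothetical debug_sql table:
insert into debug_sql (logged_at, sql_text) values (systimestamp, l_sql);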
You would need to supply table structures and test data if you'd like your problem analyzed completely, so for now I'll give you some general explanations:
Use Generic Column Names is OK if you have a permanent, unchangeable number of columns. But if the order of your columns or even their number can change, then this is a bad idea, as your page will show an error if your query returns more columns than Generic Column Count.
Option 1: Use column aliases in your query
Enhance your PL/SQL Function Body returning SQL Query in a way that it outputs verbose display names, like this:
return 'select 1 as "Your verbose column name", 2 as "Column #2", 3 as "Column #3" from dual';
It has the disadvantage that the column names also appear this way in the designer, and APEX will only update these column names when you re-validate the function. You will have a hard time referencing a column with an internal name like Your verbose column name in process code or a dynamic action.
However it still works, even if you change the column names without telling APEX, for example by externalizing the PL/SQL Function Body into a real function.
Option 2: Use custom column headings
A little bit hidden, but there is also the option of completely custom column headings. It is almost at the end of the attributes page of your report region.
Here you can also supply a function that returns your column names. Be careful: this function is not supposed to return an SQL query that itself returns column names; instead it must return the column names themselves, separated by colons.
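A minimal sketch of such a headings function body, reusing the example names from option 1:

-- Function body for the custom column headings attribute:
return 'Your verbose column name:Column #2:Column #3';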
With this method, it is easier to identify and reference your columns in the designer.
Option 3: Both combined
Turn off Generic Column Names, let your query return column names that can be easily identified and referenced, and use the custom column headings function to return verbose names for your users.
My personal opinion
I'm using the 3rd option in a production application where people can change the number and order of columns themselves, using shuttle items on the report page. It took some time, but now it works like a charm, like some dynamic PIVOT without PIVOT.

Do not fail on missing column in a SQLite query

I have a simple query like this:
SELECT * FROM CUSTOMERS WHERE CUSTID LIKE '~' AND BANKNO LIKE '~'
The problem is, the CUSTOMERS table might or might not contain the BANKNO column, depending on circumstances I have no control over. If BANKNO is not a column in CUSTOMERS, this query fails.
So my question is: is it possible to test whether the BANKNO column exists and, if so, include it in the query, and if not, exclude it?
The query really has to be flexible.
A SELECT that references a non-existent column will always fail in sqlite3.
One option is to put the "full" SQL in a try block and, if it errors, execute the other SQL.
Or, you could query PRAGMA table_info('CUSTOMERS') and interrogate the result to see whether the column in question exists. See the SQLite docs: https://www.sqlite.org/pragma.html#pragma_table_info.
I'm sure there are other options, but the bottom line is that you need to know, before the SQL is executed, that it contains only valid column names.
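For example, on SQLite 3.16 and later the pragma is also available as a table-valued function, so the existence check is a plain query (table and column names taken from the question):

-- returns 1 if CUSTOMERS has a BANKNO column, 0 otherwise
SELECT COUNT(*) FROM pragma_table_info('CUSTOMERS') WHERE name = 'BANKNO';

Depending on the result, your application then issues either the full query or the one without BANKNO.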

Oracle SQL Developer table columns without quotation marks

I'm using SQL Developer to create Oracle tables. I'm trying to create the columns without quotes, but after creating the table, when I look at the DDL, all column and table names are in quotation marks. I want all the columns to be case-insensitive.
How do I do that? Please advise.
The context here is that my code is in PHP, and I'm migrating my backend from MySQL to Oracle. With MySQL, I referenced all my table columns in lowercase, but it looks like oci_fetch_array returns the data with uppercase column names.
So do I have to change my PHP code to use uppercase, or is there an alternative? I have a hell of a lot of code to change!!
Ahh, finally figured this out. Yes, I agree, quotation marks don't always make an object case-sensitive, but my problem was that oci_fetch_all, oci_fetch_array, etc. retrieved the table columns in uppercase, whereas I wanted them in lowercase. The following statements are a workaround for the issue; they convert the column keys to lowercase.
$data_upper = oci_fetch_assoc($data_res);               // column keys come back in uppercase
$data = array_change_key_case($data_upper, CASE_LOWER);  // convert all keys to lowercase
Thanks!!
Quotation marks don't always make an object case-sensitive. Objects in all upper-case are always case-insensitive, even if they are surrounded by quotation marks.
SQL> create table test1("QUOTES_DONT_DO_ANYTHING_HERE" number);
Table created.
SQL> select quotes_DONT_do_ANYTHING_here from test1;
no rows selected
You normally only see quotation marks because some tools automatically add them to everything.
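The flip side, as a quick sketch you can run yourself: once a quoted identifier contains lowercase letters, it really is case-sensitive, and unquoted references stop working:

SQL> create table test2("quotes_matter_here" number);

Table created.

SQL> select quotes_matter_here from test2;
select quotes_matter_here from test2
       *
ERROR at line 1:
ORA-00904: "QUOTES_MATTER_HERE": invalid identifier

SQL> select "quotes_matter_here" from test2;

no rows selected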

MS SQL Server Data Tools Conditional Split: Check if row is already in DB

I need to create a data flow for an existing MS SSDT project that imports a flat CSV file into an existing database table. So far so good.
However, I would like to reject all entries where the column "code" matches a value already stored in the DB. Even better, if possible: when the column "code" matches an entry in the database, I would like to update the column "description". The important thing is that under no circumstances should duplicate code entries be created.
Thanks
OK, seeing as I figured it out, I thought someone else might find it useful:
The short answer is that a Lookup is needed between the data source and the destinations. This "lookup" separates matches that need updating from new values that need to go straight into a new table row.
Rows that match the database and need their description updated must be fed into an "OLE DB Command".
Within the "lookup" component we need to do the following:
Go to the general tab and select Redirect rows to no match output
Go to the connection tab and insert the following SQL:
SELECT id, code FROM tableName
Go into the "Columns" tab and check the "id" column in the "Available Lookup Columns" table. Also check the "code" column and drag it to its corresponding "Available Input Columns" counterpart to map the two to each other so that the lookup can compare them.
-- At this point, if you get an error caused by the mapping, try replacing the code in step 2 with:
SELECT id, CAST(code AS nvarchar(50)) AS code FROM tableName
In the Error Output, ensure that id under "Lookup Match Output" has a description of "Copy Column"
Now we need to configure the "OLE DB command" component:
Go to the "Connection Managers" tab and ensure the component is connected to the desired DB
Go to "Component Properties" and add the following code to the "SQLCommand" property:
UPDATE tableName SET description = ? WHERE id = ?
Note the "?" placeholders. They are supposed to be there: each one indicates a parameter that must be added in the "Column Mappings" tab, so do not replace them.
Finally, go into the "Column Mappings" tab and map Param_0 (the first ?) to the "description" column and Param_1 to the "id" column. No action is needed for the "code" column or any other column the DB table may contain.
Now give yourself a big pat on the back for having completed a task, which in SQL would normally be one line of code, in about ten time-consuming steps ;-)
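For comparison, that one line would be a T-SQL MERGE from a staging table (a sketch only: tableName is from the answer, stagingTable is a placeholder, and the CSV would still have to be bulk-loaded into it first):

MERGE tableName AS target
USING stagingTable AS source
    ON target.code = source.code
WHEN MATCHED THEN
    UPDATE SET target.description = source.description
WHEN NOT MATCHED THEN
    INSERT (code, description) VALUES (source.code, source.description);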
