I have a scenario where we need to load data from a source file into a target table starting from a particular date (LOAD_DATE), so I'll create a mapping parameter for LOAD_DATE and pass it into the Source Qualifier query. My query looks like this:
SELECT * FROM my_TABLE WHERE DATE >= '$$LOAD_DATE'
So here I need to pass the value for '$$LOAD_DATE' from another, external database. I know that I need to pass the value from a parameter file.
But my requirement is not to hardcode the value in the parameter file, but to feed it at runtime from another database. I would appreciate your help and thoughts on this.
You don't have to hardcode it.
You can do it like this:
Option 1. Create a mapping that generates the param file in the required format.
Read from the other DB.
In an Expression transformation, create the port below, which will generate the actual param string. Please note, we need to append a newline (CHR(10)) so the output is recognized as an actual param file.
out_str = '[<<folder name>>.WF:<<workflow name>>.ST:<<session name>>]' || CHR(10) ||
'$$LOAD_DATE=' || CHR(39) || <<date value from another DB>> || CHR(39)
Then link the above port to a flat file target. Name the output file session_param.txt or whatever is suitable. Please make sure the parameter file is generated correctly.
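For example, the generated file's contents should look something like this (the folder, workflow, and session names here are placeholders):

[MyFolder.WF:wf_load_my_table.ST:s_m_load_my_table]
$$LOAD_DATE='2020-01-01'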
Use the above file as the parameter file in your actual workflow.
Option 2. You can join the other table into the original table flow. This can be more difficult because it needs changes to the existing mapping.
Join the other table from the other DB to the main table on a dummy condition. Make sure you select distinct values of LOAD_DATE from the other table, and that you always get exactly one value from it.
Once you have the LOAD_DATE field from the other table, you can use it in a Filter transformation to filter the data (see the condition sketch after the diagram below).
After this point you can add your original mapping logic.
The whole mapping should look like this:
SQ_MAIN_TABLE ----------------------->|
SQ_ANOTHER_TABLE --DISTINCT_LOAD_DT-->JNR--FIL on LOAD_DT --><<your mapping logic>>
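For illustration, the Filter transformation condition could then be as simple as this (the main table's date port name is an assumption):

MAIN_TABLE_DATE >= DISTINCT_LOAD_DT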
Is there any way I can find labels which are not used in D365 FO (labels which don't have references)?
The cross references are stored in the database DYNAMICSXREFDB. You can use a SQL query to generate a list of labels that have no references.
This query uses two tables in the database:
Names holds an entry for each object in the application that can be referenced.
The Path field of the table holds the name of the object (e.g. /Labels/#FormRunConfiguration:ViewDefaultLabel is the path of the ViewDefaultLabel label in the FormRunConfiguration label file).
The Id field is used to reference a record in this table from other tables.
References holds the actual references that connect the objects.
Field SourceId contains the Id of the Names record of the object that references another object identified by field TargetId.
The actual query could look like this:
SELECT LabelObjects.Path AS UnusedLabel
FROM [dbo].[Names] AS LabelObjects
WHERE LabelObjects.Path LIKE '/Labels/%'
AND NOT EXISTS
(SELECT *
FROM [dbo].[References] AS LabelReferences
WHERE LabelReferences.TargetId = LabelObjects.Id)
Make sure to compile the application to update the cross-reference data; otherwise the query might give you wrong results. When I ran this query on a version 10.0.3 (PU27) environment, it returned one standard label as a result.
In APEX, when performing a Data Load (e.g. upload of a CSV file into an APEX application), is it possible to validate input data using a transformation rule?
For example, suppose to upload data about cars that have been sold this month.
The target table has the column car_manufacturer and num_car_sold.
The column car_manufacturer must accept only three values, say ('A1', 'A2', 'A3').
In pseudo PL/SQL, just to give an idea:
IF :car_manufacturer IN ('A1', 'A2', 'A3') THEN :car_manufacturer ELSE <error>
How can I check this in the upload phase? Is it possible to use a transformation rule, in order that if it fails, it returns an error message? Other ways?
Thanks in advance.
You could put a constraint on the table definition as per the other answer, or if you only want the error message for when the Data Load is used, you can use a Table Lookup.
Go to Shared Components -> Data Load Definitions
Open the Data Load Definition that you want to edit
Create Table Lookup
Select the column (e.g. car_manufacturer)
Set the Table Lookup attributes to a table that contains the list of valid values; you'll need either a table or a view for this (a sketch follows these steps)
Leave Insert New Value set to No. (If set to 'No', the default, a new record will not be created in the lookup table when the lookup column value(s) entered do not already exist. If set to 'Yes', a record will be created in the lookup table using the upload column(s), and the Upload Key Column will be retrieved from the newly created record.)
Set Error Message to the message you want to return if a match is not found.
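As a minimal sketch, the lookup source could be a small table of valid values like this (the table and column names are assumptions):

CREATE TABLE car_manufacturer_lov (
    manufacturer_code VARCHAR2(10) PRIMARY KEY
);

INSERT INTO car_manufacturer_lov VALUES ('A1');
INSERT INTO car_manufacturer_lov VALUES ('A2');
INSERT INTO car_manufacturer_lov VALUES ('A3');

Point the Table Lookup at this table and map car_manufacturer to manufacturer_code; uploaded rows with no match will then raise your error message instead of being loaded.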
How about having a check constraint on the table for the column "car_manufacturer"?
ALTER TABLE TABLE_NAME
ADD CONSTRAINT CHECK_CAR_MANUFACTURER
CHECK ( CAR_MANUFACTURER in ('A1', 'A2', 'A3'));
Is it possible to provide a default value, or a query that supplies a value, for an unmapped column in the target table using Redgate SQL Data Compare?
To explain the scenario: I have a configuration database that holds settings data for several database instances. The data is all the same shape, but the config database has an additional InstanceID field in most tables. This allows me to filter my compare to only the InstanceID relating to the source instance database. However, if I generate insert scripts they fail, because the target InstanceID fields are non-nullable. I want to provide a default value that is then used in the insert scripts. Is this doable?
SQL Data Compare doesn't have an easy way of doing this, I'm afraid.
There is one way to do it: you could create a view that selects everything from the source table along with a computed column that just provides the "default value" you want to insert. Then you can map the view to the table in the target database and compare them, deploying from the result.
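For example, a view along these lines (the schema, table, and column names are assumptions) could be mapped to the target table in the project's table mappings:

CREATE VIEW dbo.Settings_WithInstance
AS
SELECT s.*,
       CAST(42 AS int) AS InstanceID -- the "default value" for the non-nullable target column
FROM dbo.Settings AS s;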
I hope this helps.
I have a data file (.csv) which contains 10 lakh (1,000,000) records. I am uploading the file data into my table TBL_UPLOADED_DATA using Oracle SQL*Loader and a control file.
I am able to upload all the data from the file to the table smoothly, without any issues.
Now my requirement is that I want to upload only relevant data, based on some criteria.
For example, I have a table EMPLOYEE with columns EMPID, EMPNAME, REMARKS and EMPSTATUS.
I have a data file with employee data that I need to upload into the EMPLOYEE table.
Here I want to restrict some data so that it is not uploaded into the EMPLOYEE table, using SQL*Loader. Assume the restriction criteria are that REMARKS should not contain 'NO' and EMPSTATUS should not contain '00'.
How can I implement this? Please suggest what changes need to be made in the control file.
You can use the WHEN syntax to choose to include or exclude a record based on some logic, but you can only use the =, != and <> operators, so it won't do quite what you need. If your status field is two characters then you can enforce that part with:
...
INTO TABLE employee
WHEN (EMPSTATUS != '00')
FIELDS ...
... and then a record with 00 as the last field will be rejected, with the log showing something like:
1 Row not loaded because all WHEN clauses were failed.
And you could use the same method to reject a record where the remarks are just 'NO' - where that is the entire content of the field:
WHEN (REMARKS != 'NO') AND (EMPSTATUS != '00')
... but not where it is a longer value that contains NO. It isn't entirely clear which you want.
But you can't use LIKE, a function such as INSTR, or a regular expression to be more selective. If you need something more advanced, you'll need to load the data into a staging table, or use an external table instead of SQL*Loader, and then selectively insert into your real table based on those conditions (a sketch follows).
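A minimal sketch of the staging-table variant, assuming the file has first been loaded as-is into a table EMPLOYEE_STAGE (name assumed):

-- after SQL*Loader has loaded everything into EMPLOYEE_STAGE:
INSERT INTO employee (empid, empname, remarks, empstatus)
SELECT empid, empname, remarks, empstatus
FROM   employee_stage
WHERE  remarks NOT LIKE '%NO%'
AND    empstatus NOT LIKE '%00%';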
I have an ON UPDATE trigger on a table that calls a custom function. In the custom function, I want to insert into a log table the name of the current SAVEPOINT (the deepest unsaved one) along with the name of the table and a timestamp. I am currently hardcoding the name of the table (please let me know if there is a better way), but I cannot figure out how to get the name of the current SAVEPOINT.
Thanks!
By default, SQLite has no functions to get the current savepoint (or the current trigger's table).
However, if you have compiled SQLite into your application, you could use sqliteInt.h, and, from a variable sqlite3 *db, access the current savepoint's name as db->pSavepoint->zName.
One way to determine the current savepoint, without resorting to sqliteInt.h and the sqlite3 *db structure, is to set up an authorizer callback (http://www.sqlite.org/c3ref/set_authorizer.html), which is invoked while statements are compiled, and then look for the action code SQLITE_SAVEPOINT (http://www.sqlite.org/c3ref/c_alter_table.html).
The fourth parameter passed to the authorizer callback will be the name of a savepoint. By storing this name in your own structure, you will have access to the name of the last savepoint passed in during analysis/preparation of your sqlite3_stmt.
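A minimal sketch in C, assuming for simplicity that the last savepoint name is tracked in a global buffer:

#include <sqlite3.h>
#include <stdio.h>
#include <string.h>

/* Remember the name of the last savepoint opened while a statement
   was being prepared. */
static char g_last_savepoint[128] = "";

static int authorizer(void *unused, int action,
                      const char *detail1, const char *detail2,
                      const char *db_name, const char *trigger_name) {
    (void)unused; (void)db_name; (void)trigger_name;
    /* For SQLITE_SAVEPOINT, detail1 is the operation ("BEGIN", "RELEASE"
       or "ROLLBACK") and detail2 is the savepoint name. */
    if (action == SQLITE_SAVEPOINT && strcmp(detail1, "BEGIN") == 0)
        snprintf(g_last_savepoint, sizeof g_last_savepoint, "%s", detail2);
    return SQLITE_OK; /* never block the operation */
}

int main(void) {
    sqlite3 *db;
    sqlite3_open(":memory:", &db);
    sqlite3_set_authorizer(db, authorizer, NULL);
    sqlite3_exec(db, "SAVEPOINT sp_outer;", NULL, NULL, NULL);
    printf("last savepoint: %s\n", g_last_savepoint); /* prints "sp_outer" */
    sqlite3_close(db);
    return 0;
}

Your trigger-called custom function can then read g_last_savepoint when it writes the log row.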