I am using Oracle Data Integrator 11g, and I have designed a package with 2 interfaces in Oracle Data Integrator. Both interfaces insert some records into a target table (which is in Oracle).
These 2 interfaces are working fine, but I want to capture the record counts from both interfaces and load those counts into a new audit log table.
Is there any option to do that? If so, kindly reply with your answers.
The getPrevStepLog() method from the Substitution API allows you to retrieve all kinds of information about the previous step execution, including the number of rows inserted via the INSERT_COUNT parameter.
If you want to store that value in a variable, you can use this kind of refresh query for the variable and place it just after the interface in the package, in refresh mode:
SELECT '<%=odiRef.getPrevStepLog("INSERT_COUNT")%>' FROM DUAL
I am new to Cucumber for Java and trying to automate testing of a SpringBoot server backed by a MS SQL Server.
I have an endpoint "Get All Employees".
Writing a traditional feature file, I would have to list all the Employees in the Then clause.
This is not possible with thousands of employees.
So I just want to get a row count of the Employee table in the database, and then compare with the number of objects returned from the "Get All Employees" endpoint.
Compare
SELECT count(*) from EMPLOYEE
with the size of the list returned from
List<Employee> getAllEmployees()
But how does one save the row count in a variable in the feature file and then pass it into the stepdefs Java method?
I have not found any way that Gherkin allows this.
After writing a few scenarios and feature files, I understood this about Cucumber and fixed the issue.
Gherkin/Cucumber is not a programming language. It is just a specification language. When keywords like Given, Then are reached by the interpreter, the matching methods in Java code are called. So they are just triggers.
These methods are part of a Java glue class. Data is not passed out of the Java class and into the gherkin feature file. The class is instantiated at the beginning and retained until the end. Because of this, it can store state.
So, from my example in the question above, the response from the Spring endpoint call will be stored in a member variable in the glue class. The next Then invocation, which verifies the result, will call the corresponding glue method, and that method will access the data in the member variable to perform the comparison.
So Gherkin cannot do this, but Java at a lower level in the glue class, can.
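To make that concrete, here is a minimal sketch of such a glue class, assuming cucumber-spring dependency injection and an EmployeeService wrapping the "Get All Employees" call from the question; the step wording, bean names and the import paths (recent Cucumber versions) are illustrative, not definitive. The point is only that the values live in fields of the class between steps.

import io.cucumber.java.en.Given;
import io.cucumber.java.en.Then;
import io.cucumber.java.en.When;
import org.springframework.jdbc.core.JdbcTemplate;

import java.util.List;

import static org.junit.Assert.assertEquals;

public class EmployeeStepDefs {

    // Assumed to be wired in by cucumber-spring; EmployeeService and Employee
    // are the application's own types, and JdbcTemplate points at the same
    // MS SQL Server database the service uses.
    private final JdbcTemplate jdbcTemplate;
    private final EmployeeService employeeService;

    // State kept between steps; the feature file never sees these values.
    private long dbRowCount;
    private List<Employee> returnedEmployees;

    public EmployeeStepDefs(JdbcTemplate jdbcTemplate, EmployeeService employeeService) {
        this.jdbcTemplate = jdbcTemplate;
        this.employeeService = employeeService;
    }

    @Given("the employee row count has been read from the database")
    public void readEmployeeRowCount() {
        dbRowCount = jdbcTemplate.queryForObject("SELECT count(*) FROM EMPLOYEE", Long.class);
    }

    @When("I get all employees from the endpoint")
    public void getAllEmployees() {
        returnedEmployees = employeeService.getAllEmployees();
    }

    @Then("the endpoint returns as many employees as there are rows in the table")
    public void verifyCountsMatch() {
        // The comparison happens here, in Java; Gherkin only triggers this method.
        assertEquals(dbRowCount, returnedEmployees.size());
    }
}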
You could create a package, for example named dataRun (with the corresponding classes inside), and save the details there during the test via setters.
During the execution of the step "And I get the count of employees from the database" you set this count via the corresponding setter, and during the step "And I get all employees" you set the number via the dedicated setter.
Then, during the step "And I verify the number of employees is the same as the one in the database", you get the two numbers via the getters and compare them.
By the way, it's also possible to compare the names of the employees (not just the count) if you put them into lists and compare the lists.
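As a rough sketch of that idea (class and field names are illustrative, not from the question), the shared holder could look like this; each step calls a setter, and the verification step reads both getters:

package dataRun;

// Simple shared test-data holder; the glue methods keep a reference to one
// instance of this class and use it to pass values between steps.
public class EmployeeTestData {

    private long dbCount;
    private long endpointCount;

    public void setDbCount(long dbCount) { this.dbCount = dbCount; }

    public long getDbCount() { return dbCount; }

    public void setEndpointCount(long endpointCount) { this.endpointCount = endpointCount; }

    public long getEndpointCount() { return endpointCount; }
}

The verification step then simply asserts that getDbCount() equals getEndpointCount().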
I want to fetch only specified columns from a data source (i.e. exclude certain columns), because even if I don't show the data in the UI, the data is still loaded (it is visible in the Network tab of the browser DevTools).
So far, I have found that the automatic data load and the query builder / query script provide filtering and sorting of the data, but they still load all the columns.
I have tried record owner security and role-based security, but these would make the data inaccessible to some users.
You can try creating a Calculated SQL Model to limit the columns being pulled.
Let's take an example where you have a Student model with Name and City as the only two fields. You can add a new Calculated SQL model with this SQL query, and only the Name column will be pulled.
SELECT Name FROM Student
Please refer to the official guide here. Feel free to comment if you want some clarification!
Hope this helps.
We are working on a Frontend application which adds new data to layers of GeoServer. The Frontend uses WFS-T Insert calls to add this data. We use views for these layers in GeoServer to do some additional handling. The views we use are database views, i.e. created on our Oracle database itself (we do not use the SQL Views of GeoServer). These layers based on views all work fine (the solution applied with a disabled "primary key" for the view, as described elsewhere on the internet).
The views we use contain a unique ID, which is the unique ID of the "principal" table used in the view. For the unique creation of the IDs for the "principal" table, we chose to have this ID generated by a sequence defined in Oracle. For using this sequence within GeoServer you can provide this "metadata" as indicated by the documentation: http://docs.geoserver.org/stable/en/user/data/database/primarykey.html
This solution works fine, except when you use an (insert) trigger on the database view.
CREATE OR REPLACE TRIGGER OUR_VIEW_TRG
INSTEAD OF INSERT OR UPDATE OR DELETE ON vw_our_view
FOR EACH ROW
…
If we perform a WFS-T insert call this results in the following exception:
org.geoserver.wfs.WFSTransactionException: Error performing insert:
Error inserting features Error performing insert: Error inserting
features Error inserting features ORA-22816: unsupported feature with
RETURNING clause
Without telling GeoServer to use the sequence, we get back a feature ID which does not correspond to our sequence numbering on the ID of the table (GeoServer simply returns the number of rows of the table, plus one). This results in an undesired situation where we have an incorrect ID at the Frontend after a WFS-T insert; only after a refresh of the browser is the correct ID fetched.
Does anybody know if there is a way to make this work? Changing the GeoServer code ourselves, and thus creating our "own" version of GeoServer, is not an option for our customer. Stopping the use of the views would imply extra WFS-T insert calls from our Frontend application.
I created a custom table in Corda using QueryableState, e.g. an IOUStates table.
I can see the custom information getting stored in this kind of table.
But I observed that if Party A and Party B are doing the transaction, then this custom information gets stored in both places: the IOUStates table gets created in node A's ledger as well as node B's ledger, and the custom information is stored in both Party A's and Party B's ledgers.
My question is:
If a transaction is being processed from Party A's node, then I want to store part of the transaction's data, i.e. the custom data, ONLY at Party A's ledger level, i.e. off-ledger for Party A only.
It should not be shared with Party B.
In the simple case, how do I store only node-specific, off-ledger custom data?
Awaiting some reply...
Thanks.
There are a number of ways to achieve this:
Don't use Corda at all! If the data is truly off-ledger then why are you using Corda? Instead, store it in a separate database. Of course you can "JOIN" it with on-ledger data if required, as the on-ledger data is stored in a SQL database.
Similar to point one, except you can use the jdbcSession() functionality of the ServiceHub to create a custom table in the node's database. This table can easily be accessed from within your flows (see the sketch just after this list).
Create a ContractState object that only has one participant: the node that wants to store the data. I call this a "unilateral" state, i.e. a state that only one party ever stores (a sketch appears at the end of this answer).
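For the second option, here is a minimal sketch of a flow that writes node-local data through jdbcSession(); the table name, flow name and column names are illustrative assumptions, and the table itself would have to be created beforehand (for example via a migration script). Anything written this way stays in the node's own database and is never shared with the counterparty.

import co.paralleluniverse.fibers.Suspendable;
import net.corda.core.flows.FlowException;
import net.corda.core.flows.FlowLogic;
import net.corda.core.flows.StartableByRPC;

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

@StartableByRPC
public class RecordPrivateNoteFlow extends FlowLogic<Void> {

    private final String txId;
    private final String note;

    public RecordPrivateNoteFlow(String txId, String note) {
        this.txId = txId;
        this.note = note;
    }

    @Suspendable
    @Override
    public Void call() throws FlowException {
        // jdbcSession() hands back a JDBC Connection into this node's own database;
        // the hypothetical PRIVATE_IOU_NOTES table exists only on this node.
        Connection session = getServiceHub().jdbcSession();
        try (PreparedStatement ps = session.prepareStatement(
                "INSERT INTO PRIVATE_IOU_NOTES (TX_ID, NOTE) VALUES (?, ?)")) {
            ps.setString(1, txId);
            ps.setString(2, note);
            ps.executeUpdate();
        } catch (SQLException e) {
            throw new FlowException("Could not persist the private note", e);
        }
        return null;
    }
}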
Most importantly, if you don't want to share some data with a counter-party, then it should never be disclosed inside a Corda state object or attachment that another party might see. Instead:
inside your flows, you can use the data encapsulated within the shared state object (e.g. the IOU) to derive the private data
alternatively if the data is supplied when the flow begins then store the private data locally using one of the methods above
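For the third option, a sketch of such a "unilateral" state could look like the following (names are illustrative; depending on your Corda version you may also need to annotate the state with @BelongsToContract). Because only one party appears in the participants list, only that node's vault records the state.

import net.corda.core.contracts.ContractState;
import net.corda.core.identity.AbstractParty;
import net.corda.core.identity.Party;

import java.util.Collections;
import java.util.List;

public class PrivateNoteState implements ContractState {

    private final Party owner;
    private final String note;

    public PrivateNoteState(Party owner, String note) {
        this.owner = owner;
        this.note = note;
    }

    public Party getOwner() { return owner; }

    public String getNote() { return note; }

    @Override
    public List<AbstractParty> getParticipants() {
        // Only the owning node appears here, so only its vault stores the state.
        return Collections.singletonList(owner);
    }
}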
I am creating my first orchestration in BizTalk and am having trouble coming up with an efficient way to update a database (specifically, up to 3 different tables).
The user calls our service with an inbound message matching a schema which contains emplid (unique id) and then a bunch of name-value pairs (see source schema in this picture). The "name" corresponds to a column in a table (e.g. if the name is "employeename" it corresponds to the NAME column of the EMPLOYEE table). The value of course is the new value that the user wants that column to be updated to.
So they could pass in an update message which only applies to 1 table, 2 tables, or all 3, depending on the fields they want to update for the passed in employee.
I am currently approaching it as 3 separate updates with 3 table adapters (one for each table, one of which is pictured above), but I'm having trouble handling the different cases where they pass in updateValuePairs for all 3 tables versus only one or two tables (the other queries still attempt to run and fail). Am I approaching this right? Or is there a better way to accomplish what I am trying to do?
I would try a different way in order to implement a cleaner solution:
Create a stored procedure that handles the logic of which table to update.
Then you will need only one mapping and one LOB adapter instead of the 3 you have now.
Overview of the solution:
1. Receive the input in the orchestration.
2. Map the input to the stored procedure's generated schema.
3. Send the mapped data through the DB/LOB adapter into the DB.
Here is a link that can help you (I'm assuming you use BizTalk 2010):
How to use Oracle Stored Procedure