I have a BizTalk solution, and so far I have been able to do the following:
1) I took the SQL adapter as my source schema. I wanted node-wise XML, so I used FOR XML AUTO, ELEMENTS in my stored procedure so that it generates the schema node-wise.
2) I am able to loop through all the nodes and check a condition inside the loop with a Decide shape, which executes perfectly. But now the issue is that I want to insert the current XML into a table. From all the XML nodes, I get a single node's XML like the following:
<userDetails xmlns="http://SqlRowLooping"><userID>1</userID><fName>niladri</fName><lName>Roy</lName><department>it</department></userDetails>
I have an updategram as well, but I think it accepts data attribute-wise; right now it throws an error saying it can't find a procedure named userID.
How can I insert this into the table, and how will the updategram work?
Thanks.
Change the XML node to conform to the updategram syntax; see MSDN.
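As a minimal sketch, the updategram message could look like this, assuming the target table is also named userDetails and its columns match the element names (the table and column names are assumptions). Note that updategrams accept element-centric as well as attribute-centric mapping, so the data does not have to be attribute-wise:

<ROOT xmlns:updg="urn:schemas-microsoft-com:xml-updategram">
  <updg:sync>
    <updg:after>
      <!-- each child element maps to a column of the (assumed) userDetails table -->
      <userDetails>
        <userID>1</userID>
        <fName>niladri</fName>
        <lName>Roy</lName>
        <department>it</department>
      </userDetails>
    </updg:after>
  </updg:sync>
</ROOT>

An <updg:sync> block containing only <updg:after> (no <updg:before>) performs an insert, which is what you want here.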
I am running an AEM SQL2 query in CRXDE, and it successfully returns nodes, as per the screenshot below.
But I need the data column-wise (JCR properties), like a SQL table. Can anyone tell me if this is possible?
You can't do this with CRXDE. It shows only the path of the outermost node, even if the query has multiple columns. This is especially limiting if your query uses joins.
In your case I would recommend the Query Builder. It has a totally different syntax, but the JSON or XML result contains all the data you need.
I don't know of other tools. As an AEM developer, I usually write a quick & dirty servlet and run it on my local instance (with production content).
Query Builder Debugger
http://localhost:4502/libs/cq/search/content/querydebug.html
Example Query
path=/content/we-retail/language-masters/en/experience
property=sling:resourceType
property.value=weretail/components/content/image
p.hits=full
p.nodedepth=2
Resulting JSON Query
http://localhost:4502/bin/querybuilder.json?p.hits=full&p.nodedepth=2&path=%2fcontent%2fwe-retail%2flanguage-masters%2fen%2fexperience&property=sling%3aresourceType&property.value=weretail%2fcomponents%2fcontent%2fimage
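For illustration only, the response comes back in roughly this shape (abridged; the path and property values below are made up), with each hit carrying the node's properties so you can read them column-wise:

{
  "success": true,
  "results": 1,
  "total": 1,
  "hits": [
    {
      "jcr:path": "/content/we-retail/language-masters/en/experience/...",
      "jcr:primaryType": "nt:unstructured",
      "sling:resourceType": "weretail/components/content/image"
    }
  ]
}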
Documentation
https://docs.adobe.com/content/help/en/experience-manager-64/developing/platform/query-builder/querybuilder-api.html
In your case especially see: Refining What Is Returned
You will find much more via Google, as the Query Builder has been part of AEM/CQ for a long time.
I have a positional input flat file schema of the following kind.
<Employees>
<Employee>
<Data>
In the mapping, I need to extract the strings by position to pass on to the target schema.
I have the following conditions:
If Data has 500 records, there should be 5 files of 100 records at the output location.
If Data has 522 records, there should be 6 files (5*100, 1*22 records) at the output location.
I have tried a few suggestions from the internet, such as:
Setting "Allow Message Breakup At Infix Root" to "Yes" and setting maxOccurs to "100". This doesn't seem to work. (See: How to Debatch (Split) a Flat File using Flat File Schema?)
I'm also working on the custom receive pipeline component suggested at Split Flat Files into smaller files (on row count) using Custom Pipeline, but I'm quite new to this, so it's taking some time.
Please let me know if there is any simpler way of doing this, without implementing the custom pipeline component.
The approach I'm following is to divide the input flat file into multiple smaller files per the conditions above, write them to the receive location, and then process them with the native flat file disassembler. Please correct me if there is a better approach.
You have two options:
Import the flat file to a SQL table using SSIS.
Parse the input file as one message, then map to a Composite Operation to insert the records into a SQL table. You could also use an Insert updategram.
After either 1 or 2, call a Stored Procedure to retrieve the Count and Order of messages you need.
A simple way for a flat file structure without writing custom C# code is to just use a Database table. Just insert the whole file as records into the table, and then have a Receive Location that polls for records in the batch size you want.
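As a sketch, the polling stored procedure for that Receive Location could look like this, assuming a staging table with a Processed flag (all object names here are placeholders):

CREATE PROCEDURE dbo.GetNextBatch
AS
BEGIN
    SET NOCOUNT ON;

    -- pick the next (up to) 100 unprocessed rows
    DECLARE @batch TABLE (RecordID int PRIMARY KEY);
    INSERT INTO @batch (RecordID)
    SELECT TOP (100) RecordID
    FROM dbo.StagingRecords
    WHERE Processed = 0
    ORDER BY RecordID;

    -- return them as one XML message for the adapter
    SELECT Record.RecordID, Record.Field1, Record.Field2
    FROM dbo.StagingRecords AS Record
    JOIN @batch AS b ON b.RecordID = Record.RecordID
    FOR XML AUTO, ELEMENTS;

    -- flag them so the next poll picks up the next batch
    UPDATE r
    SET Processed = 1
    FROM dbo.StagingRecords AS r
    JOIN @batch AS b ON b.RecordID = r.RecordID;
END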
Another approach is called the Scatter-Gather pattern. In this case you do set maxOccurs to 1, which will debatch the file into individual records, and you then have an Orchestration that reassembles them into the batch size you want. You will have to read up on Correlation Sets to do this.
I am importing a file with 200+ records into a master table.
The BizTalk package services only one source; other packages service other sources.
I am using strongly typed stored procedures for all SQL CRUD.
All records inside the file come from the same source.
The file does not contain the source name or source ID.
I want to determine the source from a value hard-coded in the package.
The Master table contains records from several sources.
Before the import: delete the source's existing records from the Master table.
Unlike the file import, the delete statement happens only once:
DELETE FROM Master WHERE SourceID = @SourceID
The file import works, but how can I hard-code the SourceID for the delete?
In your delete transform (just above the Send shape) you can set up a SourceID property for the outgoing message. You can then populate the message context with this SourceID, and it can then be used in your delete statement.
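As a rough sketch, the assignment could look like this in a Message Assignment shape inside the Construct shape for the outgoing delete message (the message name and the property schema are assumptions; you would first define the SourceID property in a property schema):

// XLANG/s Message Assignment shape - hard-code this package's source ID into the context
DeleteMsg(MyProject.PropertySchema.SourceID) = "42";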
If I understand correctly, you want to delete all existing records for the SourceID before inserting new ones?
If so, you need to have access to the SourceID value on the inbound message into the orchestration.
To do this, use property promotion.
You can either do this:
inside a pipeline component configured on the receive port, so that the property is available when the message arrives at the orchestration, or,
inside the orchestration, which will require moving the construct shape for the InsertCSV message above the delete construct shape, and promoting the property within the construct shape.
Of these options, the first is probably the best, as assigning properties should ideally be done during message disassembly.
Alternatively, you can use an xpath() call within an Expression shape to interrogate the message and retrieve the value that way, avoiding property promotion altogether (see the sketch below).
However, while quicker to implement, this approach is not best practice because it makes your orchestration very sensitive to changes in the message schema.
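A sketch of such a call, where the message name, element names, and namespace handling are all assumptions (declare the receiving variable as a string):

// XLANG/s Expression shape - wrap the XPath in string() so a string value comes back
sourceValue = xpath(InsertCSV_Msg, "string(/*[local-name()='Employees']/*[local-name()='Employee'][1]/*[local-name()='SomeField'])");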
I have a Kettle job and a transformation.
The transformation writes the result set of a SELECT SQL statement to a CSV file.
The job takes the result file and mails it to the user.
I need to send the mail only if the file contains any data; if it is empty, the result should not be mailed to the user.
Alternatively, how can I find out whether the result of a transformation is empty (is there any file-size validator job entry available)?
I am not able to find any job entries for this kind of condition.
Thanks in advance.
You can use the Evaluate files metrics job step in the Conditions branch. Set your condition on the Advanced tab.
You can set your transformation to generate the file only if there is data, and then use the File exists? step in your main job.
I was given a report today with a normal embedded dataset (dataset1) and data source (datasource1), but the dataset query is just a number: '1411'. The previous programmer manually entered fields (not calculated fields) on the Fields tab.
When I click RUN, it works.
How is it populating the page without a proper query?
-There is only one tablix, called table1. It also points to dataset1.
-In Report Properties there is no VB code.
-RDL XML: Under dataset1's tag:
<DataSourceName>datasource1</DataSourceName>
<CommandText>=1411</CommandText>
I see no other SQL listed. Could there be something else on the server that's triggering it?
What sort of data source is "datasource1"?
If it's an RDBMS, check if there is a stored procedure or function in the database with the name "1411".
In SQL Server, for example, you could conceivably have a stored procedure called [1411] that returns a data set.
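A contrived sketch of that, since a delimited identifier in SQL Server may be purely numeric (the body here is made up):

-- a procedure whose name is just a number; it must be created and called with delimiters
CREATE PROCEDURE [1411]
AS
    SELECT SomeColumn FROM dbo.SomeTable;  -- placeholder body
GO

EXEC [1411];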
I'm assuming we are talking about RDL (Report Definition Language). You might open the report in your favorite text editor and look at the CommandText XML tag to find the associated query. Hope that helps.